Test Report: KVM_Linux_crio 19423

                    
74b5ac7e1cfb7233a98e35daf2ce49e3acb00be2:2024-08-19:35861
Test fail (17/222)

TestAddons/parallel/Ingress (156.22s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-966657 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-966657 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-966657 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a7779fae-ee4a-477e-8939-1538b08b9407] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a7779fae-ee4a-477e-8939-1538b08b9407] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.004018639s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-966657 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.095695288s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
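For reference, the check that timed out above can be rerun by hand against the same profile while it is still up. This is only a sketch: the profile name and Host header are taken from the log above, and the explicit --max-time guard is an assumption added so the command fails fast instead of hanging.

	out/minikube-linux-amd64 -p addons-966657 ssh \
	  "curl -s --max-time 120 -H 'Host: nginx.example.com' http://127.0.0.1/"
	echo "exit: $?"   # curl exits 28 on a timeout, which ssh reports as "Process exited with status 28"
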
addons_test.go:288: (dbg) Run:  kubectl --context addons-966657 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.241
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-966657 addons disable ingress-dns --alsologtostderr -v=1: (1.338132799s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-966657 addons disable ingress --alsologtostderr -v=1: (7.702125994s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-966657 -n addons-966657
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-966657 logs -n 25: (1.170484971s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-873673                                                                     | download-only-873673 | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC | 19 Aug 24 18:36 UTC |
	| delete  | -p download-only-087609                                                                     | download-only-087609 | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC | 19 Aug 24 18:36 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-219006 | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC |                     |
	|         | binary-mirror-219006                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40397                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-219006                                                                     | binary-mirror-219006 | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC | 19 Aug 24 18:36 UTC |
	| addons  | enable dashboard -p                                                                         | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC |                     |
	|         | addons-966657                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC |                     |
	|         | addons-966657                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-966657 --wait=true                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC | 19 Aug 24 18:39 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-966657 ip                                                                            | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | addons-966657                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-966657 ssh curl -s                                                                   | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-966657 ssh cat                                                                       | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | /opt/local-path-provisioner/pvc-bdc7ef98-d7dd-48c4-baf5-5803f9aa11e7_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | -p addons-966657                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | -p addons-966657                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:41 UTC | 19 Aug 24 18:41 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966657 addons                                                                        | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:41 UTC | 19 Aug 24 18:41 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-966657 addons                                                                        | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:41 UTC | 19 Aug 24 18:41 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:41 UTC | 19 Aug 24 18:41 UTC |
	|         | addons-966657                                                                               |                      |         |         |                     |                     |
	| ip      | addons-966657 ip                                                                            | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:42 UTC | 19 Aug 24 18:42 UTC |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:42 UTC | 19 Aug 24 18:42 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:42 UTC | 19 Aug 24 18:43 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:36:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:36:40.661591  438797 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:36:40.661702  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:36:40.661709  438797 out.go:358] Setting ErrFile to fd 2...
	I0819 18:36:40.661716  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:36:40.661910  438797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 18:36:40.662603  438797 out.go:352] Setting JSON to false
	I0819 18:36:40.663577  438797 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8352,"bootTime":1724084249,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:36:40.663643  438797 start.go:139] virtualization: kvm guest
	I0819 18:36:40.665523  438797 out.go:177] * [addons-966657] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:36:40.666647  438797 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 18:36:40.666677  438797 notify.go:220] Checking for updates...
	I0819 18:36:40.668812  438797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:36:40.669997  438797 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 18:36:40.671302  438797 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 18:36:40.672532  438797 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:36:40.673661  438797 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:36:40.674802  438797 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 18:36:40.707429  438797 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 18:36:40.708534  438797 start.go:297] selected driver: kvm2
	I0819 18:36:40.708562  438797 start.go:901] validating driver "kvm2" against <nil>
	I0819 18:36:40.708574  438797 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:36:40.709416  438797 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:36:40.709522  438797 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:36:40.725935  438797 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:36:40.726015  438797 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 18:36:40.726224  438797 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:36:40.726293  438797 cni.go:84] Creating CNI manager for ""
	I0819 18:36:40.726305  438797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:36:40.726313  438797 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 18:36:40.726363  438797 start.go:340] cluster config:
	{Name:addons-966657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-966657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:36:40.726455  438797 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:36:40.728235  438797 out.go:177] * Starting "addons-966657" primary control-plane node in "addons-966657" cluster
	I0819 18:36:40.729379  438797 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:36:40.729431  438797 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:36:40.729442  438797 cache.go:56] Caching tarball of preloaded images
	I0819 18:36:40.729538  438797 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:36:40.729549  438797 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:36:40.729841  438797 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/config.json ...
	I0819 18:36:40.729862  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/config.json: {Name:mk2e4ced8a52cff2912bf206bbef7911649fae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:36:40.730012  438797 start.go:360] acquireMachinesLock for addons-966657: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:36:40.730058  438797 start.go:364] duration metric: took 32.114µs to acquireMachinesLock for "addons-966657"
	I0819 18:36:40.730078  438797 start.go:93] Provisioning new machine with config: &{Name:addons-966657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-966657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:36:40.730138  438797 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 18:36:40.731652  438797 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 18:36:40.731789  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:36:40.731816  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:36:40.746558  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0819 18:36:40.747093  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:36:40.747669  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:36:40.747709  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:36:40.748099  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:36:40.748323  438797 main.go:141] libmachine: (addons-966657) Calling .GetMachineName
	I0819 18:36:40.748477  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:36:40.748622  438797 start.go:159] libmachine.API.Create for "addons-966657" (driver="kvm2")
	I0819 18:36:40.748650  438797 client.go:168] LocalClient.Create starting
	I0819 18:36:40.748693  438797 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem
	I0819 18:36:40.904320  438797 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem
	I0819 18:36:41.071567  438797 main.go:141] libmachine: Running pre-create checks...
	I0819 18:36:41.071599  438797 main.go:141] libmachine: (addons-966657) Calling .PreCreateCheck
	I0819 18:36:41.072189  438797 main.go:141] libmachine: (addons-966657) Calling .GetConfigRaw
	I0819 18:36:41.072709  438797 main.go:141] libmachine: Creating machine...
	I0819 18:36:41.072727  438797 main.go:141] libmachine: (addons-966657) Calling .Create
	I0819 18:36:41.072886  438797 main.go:141] libmachine: (addons-966657) Creating KVM machine...
	I0819 18:36:41.074232  438797 main.go:141] libmachine: (addons-966657) DBG | found existing default KVM network
	I0819 18:36:41.075035  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:41.074891  438819 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c10}
	I0819 18:36:41.075089  438797 main.go:141] libmachine: (addons-966657) DBG | created network xml: 
	I0819 18:36:41.075114  438797 main.go:141] libmachine: (addons-966657) DBG | <network>
	I0819 18:36:41.075125  438797 main.go:141] libmachine: (addons-966657) DBG |   <name>mk-addons-966657</name>
	I0819 18:36:41.075139  438797 main.go:141] libmachine: (addons-966657) DBG |   <dns enable='no'/>
	I0819 18:36:41.075148  438797 main.go:141] libmachine: (addons-966657) DBG |   
	I0819 18:36:41.075159  438797 main.go:141] libmachine: (addons-966657) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 18:36:41.075168  438797 main.go:141] libmachine: (addons-966657) DBG |     <dhcp>
	I0819 18:36:41.075178  438797 main.go:141] libmachine: (addons-966657) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 18:36:41.075188  438797 main.go:141] libmachine: (addons-966657) DBG |     </dhcp>
	I0819 18:36:41.075193  438797 main.go:141] libmachine: (addons-966657) DBG |   </ip>
	I0819 18:36:41.075233  438797 main.go:141] libmachine: (addons-966657) DBG |   
	I0819 18:36:41.075260  438797 main.go:141] libmachine: (addons-966657) DBG | </network>
	I0819 18:36:41.075321  438797 main.go:141] libmachine: (addons-966657) DBG | 
	I0819 18:36:41.080744  438797 main.go:141] libmachine: (addons-966657) DBG | trying to create private KVM network mk-addons-966657 192.168.39.0/24...
	I0819 18:36:41.153960  438797 main.go:141] libmachine: (addons-966657) DBG | private KVM network mk-addons-966657 192.168.39.0/24 created
	I0819 18:36:41.153996  438797 main.go:141] libmachine: (addons-966657) Setting up store path in /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657 ...
	I0819 18:36:41.154018  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:41.153937  438819 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 18:36:41.154032  438797 main.go:141] libmachine: (addons-966657) Building disk image from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 18:36:41.154197  438797 main.go:141] libmachine: (addons-966657) Downloading /home/jenkins/minikube-integration/19423-430949/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 18:36:41.414175  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:41.413996  438819 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa...
	I0819 18:36:41.498202  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:41.498038  438819 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/addons-966657.rawdisk...
	I0819 18:36:41.498237  438797 main.go:141] libmachine: (addons-966657) DBG | Writing magic tar header
	I0819 18:36:41.498248  438797 main.go:141] libmachine: (addons-966657) DBG | Writing SSH key tar header
	I0819 18:36:41.498257  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:41.498154  438819 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657 ...
	I0819 18:36:41.498268  438797 main.go:141] libmachine: (addons-966657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657
	I0819 18:36:41.498338  438797 main.go:141] libmachine: (addons-966657) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657 (perms=drwx------)
	I0819 18:36:41.498366  438797 main.go:141] libmachine: (addons-966657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines
	I0819 18:36:41.498379  438797 main.go:141] libmachine: (addons-966657) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines (perms=drwxr-xr-x)
	I0819 18:36:41.498387  438797 main.go:141] libmachine: (addons-966657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 18:36:41.498399  438797 main.go:141] libmachine: (addons-966657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949
	I0819 18:36:41.498405  438797 main.go:141] libmachine: (addons-966657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 18:36:41.498414  438797 main.go:141] libmachine: (addons-966657) DBG | Checking permissions on dir: /home/jenkins
	I0819 18:36:41.498422  438797 main.go:141] libmachine: (addons-966657) DBG | Checking permissions on dir: /home
	I0819 18:36:41.498438  438797 main.go:141] libmachine: (addons-966657) DBG | Skipping /home - not owner
	I0819 18:36:41.498458  438797 main.go:141] libmachine: (addons-966657) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube (perms=drwxr-xr-x)
	I0819 18:36:41.498473  438797 main.go:141] libmachine: (addons-966657) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949 (perms=drwxrwxr-x)
	I0819 18:36:41.498486  438797 main.go:141] libmachine: (addons-966657) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 18:36:41.498497  438797 main.go:141] libmachine: (addons-966657) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 18:36:41.498502  438797 main.go:141] libmachine: (addons-966657) Creating domain...
	I0819 18:36:41.499540  438797 main.go:141] libmachine: (addons-966657) define libvirt domain using xml: 
	I0819 18:36:41.499570  438797 main.go:141] libmachine: (addons-966657) <domain type='kvm'>
	I0819 18:36:41.499583  438797 main.go:141] libmachine: (addons-966657)   <name>addons-966657</name>
	I0819 18:36:41.499592  438797 main.go:141] libmachine: (addons-966657)   <memory unit='MiB'>4000</memory>
	I0819 18:36:41.499608  438797 main.go:141] libmachine: (addons-966657)   <vcpu>2</vcpu>
	I0819 18:36:41.499620  438797 main.go:141] libmachine: (addons-966657)   <features>
	I0819 18:36:41.499656  438797 main.go:141] libmachine: (addons-966657)     <acpi/>
	I0819 18:36:41.499680  438797 main.go:141] libmachine: (addons-966657)     <apic/>
	I0819 18:36:41.499690  438797 main.go:141] libmachine: (addons-966657)     <pae/>
	I0819 18:36:41.499697  438797 main.go:141] libmachine: (addons-966657)     
	I0819 18:36:41.499706  438797 main.go:141] libmachine: (addons-966657)   </features>
	I0819 18:36:41.499718  438797 main.go:141] libmachine: (addons-966657)   <cpu mode='host-passthrough'>
	I0819 18:36:41.499727  438797 main.go:141] libmachine: (addons-966657)   
	I0819 18:36:41.499735  438797 main.go:141] libmachine: (addons-966657)   </cpu>
	I0819 18:36:41.499744  438797 main.go:141] libmachine: (addons-966657)   <os>
	I0819 18:36:41.499749  438797 main.go:141] libmachine: (addons-966657)     <type>hvm</type>
	I0819 18:36:41.499763  438797 main.go:141] libmachine: (addons-966657)     <boot dev='cdrom'/>
	I0819 18:36:41.499785  438797 main.go:141] libmachine: (addons-966657)     <boot dev='hd'/>
	I0819 18:36:41.499798  438797 main.go:141] libmachine: (addons-966657)     <bootmenu enable='no'/>
	I0819 18:36:41.499807  438797 main.go:141] libmachine: (addons-966657)   </os>
	I0819 18:36:41.499813  438797 main.go:141] libmachine: (addons-966657)   <devices>
	I0819 18:36:41.499823  438797 main.go:141] libmachine: (addons-966657)     <disk type='file' device='cdrom'>
	I0819 18:36:41.499833  438797 main.go:141] libmachine: (addons-966657)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/boot2docker.iso'/>
	I0819 18:36:41.499841  438797 main.go:141] libmachine: (addons-966657)       <target dev='hdc' bus='scsi'/>
	I0819 18:36:41.499849  438797 main.go:141] libmachine: (addons-966657)       <readonly/>
	I0819 18:36:41.499863  438797 main.go:141] libmachine: (addons-966657)     </disk>
	I0819 18:36:41.499876  438797 main.go:141] libmachine: (addons-966657)     <disk type='file' device='disk'>
	I0819 18:36:41.499889  438797 main.go:141] libmachine: (addons-966657)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 18:36:41.499903  438797 main.go:141] libmachine: (addons-966657)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/addons-966657.rawdisk'/>
	I0819 18:36:41.499910  438797 main.go:141] libmachine: (addons-966657)       <target dev='hda' bus='virtio'/>
	I0819 18:36:41.499916  438797 main.go:141] libmachine: (addons-966657)     </disk>
	I0819 18:36:41.499921  438797 main.go:141] libmachine: (addons-966657)     <interface type='network'>
	I0819 18:36:41.499929  438797 main.go:141] libmachine: (addons-966657)       <source network='mk-addons-966657'/>
	I0819 18:36:41.499940  438797 main.go:141] libmachine: (addons-966657)       <model type='virtio'/>
	I0819 18:36:41.499951  438797 main.go:141] libmachine: (addons-966657)     </interface>
	I0819 18:36:41.499963  438797 main.go:141] libmachine: (addons-966657)     <interface type='network'>
	I0819 18:36:41.499976  438797 main.go:141] libmachine: (addons-966657)       <source network='default'/>
	I0819 18:36:41.499983  438797 main.go:141] libmachine: (addons-966657)       <model type='virtio'/>
	I0819 18:36:41.499988  438797 main.go:141] libmachine: (addons-966657)     </interface>
	I0819 18:36:41.499999  438797 main.go:141] libmachine: (addons-966657)     <serial type='pty'>
	I0819 18:36:41.500006  438797 main.go:141] libmachine: (addons-966657)       <target port='0'/>
	I0819 18:36:41.500011  438797 main.go:141] libmachine: (addons-966657)     </serial>
	I0819 18:36:41.500025  438797 main.go:141] libmachine: (addons-966657)     <console type='pty'>
	I0819 18:36:41.500035  438797 main.go:141] libmachine: (addons-966657)       <target type='serial' port='0'/>
	I0819 18:36:41.500041  438797 main.go:141] libmachine: (addons-966657)     </console>
	I0819 18:36:41.500047  438797 main.go:141] libmachine: (addons-966657)     <rng model='virtio'>
	I0819 18:36:41.500056  438797 main.go:141] libmachine: (addons-966657)       <backend model='random'>/dev/random</backend>
	I0819 18:36:41.500063  438797 main.go:141] libmachine: (addons-966657)     </rng>
	I0819 18:36:41.500068  438797 main.go:141] libmachine: (addons-966657)     
	I0819 18:36:41.500075  438797 main.go:141] libmachine: (addons-966657)     
	I0819 18:36:41.500080  438797 main.go:141] libmachine: (addons-966657)   </devices>
	I0819 18:36:41.500087  438797 main.go:141] libmachine: (addons-966657) </domain>
	I0819 18:36:41.500091  438797 main.go:141] libmachine: (addons-966657) 
	I0819 18:36:41.504481  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:86:30:07 in network default
	I0819 18:36:41.505011  438797 main.go:141] libmachine: (addons-966657) Ensuring networks are active...
	I0819 18:36:41.505036  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:41.505728  438797 main.go:141] libmachine: (addons-966657) Ensuring network default is active
	I0819 18:36:41.505991  438797 main.go:141] libmachine: (addons-966657) Ensuring network mk-addons-966657 is active
	I0819 18:36:41.506421  438797 main.go:141] libmachine: (addons-966657) Getting domain xml...
	I0819 18:36:41.507078  438797 main.go:141] libmachine: (addons-966657) Creating domain...
	I0819 18:36:42.742586  438797 main.go:141] libmachine: (addons-966657) Waiting to get IP...
	I0819 18:36:42.743506  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:42.743982  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:42.744119  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:42.744060  438819 retry.go:31] will retry after 235.975158ms: waiting for machine to come up
	I0819 18:36:42.981733  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:42.982174  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:42.982200  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:42.982139  438819 retry.go:31] will retry after 356.596416ms: waiting for machine to come up
	I0819 18:36:43.340806  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:43.341250  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:43.341279  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:43.341194  438819 retry.go:31] will retry after 480.923964ms: waiting for machine to come up
	I0819 18:36:43.823921  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:43.824372  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:43.824394  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:43.824337  438819 retry.go:31] will retry after 563.24209ms: waiting for machine to come up
	I0819 18:36:44.389101  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:44.389625  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:44.389658  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:44.389589  438819 retry.go:31] will retry after 672.851827ms: waiting for machine to come up
	I0819 18:36:45.064597  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:45.065153  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:45.065184  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:45.065090  438819 retry.go:31] will retry after 736.246184ms: waiting for machine to come up
	I0819 18:36:45.803008  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:45.803518  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:45.803553  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:45.803431  438819 retry.go:31] will retry after 1.156596743s: waiting for machine to come up
	I0819 18:36:46.962034  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:46.962383  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:46.962405  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:46.962339  438819 retry.go:31] will retry after 1.255605784s: waiting for machine to come up
	I0819 18:36:48.219864  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:48.220393  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:48.220422  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:48.220343  438819 retry.go:31] will retry after 1.84715451s: waiting for machine to come up
	I0819 18:36:50.070606  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:50.071095  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:50.071130  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:50.071033  438819 retry.go:31] will retry after 1.71879158s: waiting for machine to come up
	I0819 18:36:51.791402  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:51.791849  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:51.791878  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:51.791806  438819 retry.go:31] will retry after 2.519575936s: waiting for machine to come up
	I0819 18:36:54.314700  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:54.315062  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:54.315090  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:54.315020  438819 retry.go:31] will retry after 2.837406053s: waiting for machine to come up
	I0819 18:36:57.154690  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:57.155142  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:57.155167  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:57.155087  438819 retry.go:31] will retry after 4.457278559s: waiting for machine to come up
	I0819 18:37:01.614178  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.614650  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has current primary IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.614673  438797 main.go:141] libmachine: (addons-966657) Found IP for machine: 192.168.39.241
	I0819 18:37:01.614716  438797 main.go:141] libmachine: (addons-966657) Reserving static IP address...
	I0819 18:37:01.615156  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find host DHCP lease matching {name: "addons-966657", mac: "52:54:00:eb:04:e6", ip: "192.168.39.241"} in network mk-addons-966657
	I0819 18:37:01.702986  438797 main.go:141] libmachine: (addons-966657) DBG | Getting to WaitForSSH function...
	I0819 18:37:01.703028  438797 main.go:141] libmachine: (addons-966657) Reserved static IP address: 192.168.39.241
	I0819 18:37:01.703042  438797 main.go:141] libmachine: (addons-966657) Waiting for SSH to be available...
	I0819 18:37:01.705820  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.706326  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:01.706360  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.706540  438797 main.go:141] libmachine: (addons-966657) DBG | Using SSH client type: external
	I0819 18:37:01.706567  438797 main.go:141] libmachine: (addons-966657) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa (-rw-------)
	I0819 18:37:01.706600  438797 main.go:141] libmachine: (addons-966657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:37:01.706618  438797 main.go:141] libmachine: (addons-966657) DBG | About to run SSH command:
	I0819 18:37:01.706660  438797 main.go:141] libmachine: (addons-966657) DBG | exit 0
	I0819 18:37:01.833315  438797 main.go:141] libmachine: (addons-966657) DBG | SSH cmd err, output: <nil>: 
	I0819 18:37:01.833582  438797 main.go:141] libmachine: (addons-966657) KVM machine creation complete!
	I0819 18:37:01.834072  438797 main.go:141] libmachine: (addons-966657) Calling .GetConfigRaw
	I0819 18:37:01.834690  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:01.834972  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:01.835175  438797 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 18:37:01.835196  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:01.836558  438797 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 18:37:01.836576  438797 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 18:37:01.836583  438797 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 18:37:01.836589  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:01.839730  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.840744  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:01.840775  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.841039  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:01.841287  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:01.841497  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:01.841674  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:01.841852  438797 main.go:141] libmachine: Using SSH client type: native
	I0819 18:37:01.842102  438797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0819 18:37:01.842114  438797 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 18:37:01.952493  438797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:37:01.952526  438797 main.go:141] libmachine: Detecting the provisioner...
	I0819 18:37:01.952546  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:01.955740  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.956055  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:01.956081  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.956205  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:01.956435  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:01.956617  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:01.956793  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:01.957044  438797 main.go:141] libmachine: Using SSH client type: native
	I0819 18:37:01.957263  438797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0819 18:37:01.957274  438797 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 18:37:02.069852  438797 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 18:37:02.069941  438797 main.go:141] libmachine: found compatible host: buildroot
	I0819 18:37:02.069955  438797 main.go:141] libmachine: Provisioning with buildroot...
	I0819 18:37:02.069967  438797 main.go:141] libmachine: (addons-966657) Calling .GetMachineName
	I0819 18:37:02.070247  438797 buildroot.go:166] provisioning hostname "addons-966657"
	I0819 18:37:02.070274  438797 main.go:141] libmachine: (addons-966657) Calling .GetMachineName
	I0819 18:37:02.070478  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:02.073216  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.073694  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.073740  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.073959  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:02.074172  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.074347  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.074507  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:02.074690  438797 main.go:141] libmachine: Using SSH client type: native
	I0819 18:37:02.074870  438797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0819 18:37:02.074882  438797 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-966657 && echo "addons-966657" | sudo tee /etc/hostname
	I0819 18:37:02.198933  438797 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-966657
	
	I0819 18:37:02.198972  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:02.201983  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.202429  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.202464  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.202665  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:02.202908  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.203097  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.203291  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:02.203462  438797 main.go:141] libmachine: Using SSH client type: native
	I0819 18:37:02.203656  438797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0819 18:37:02.203677  438797 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-966657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-966657/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-966657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:37:02.322326  438797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:37:02.322357  438797 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 18:37:02.322400  438797 buildroot.go:174] setting up certificates
	I0819 18:37:02.322412  438797 provision.go:84] configureAuth start
	I0819 18:37:02.322425  438797 main.go:141] libmachine: (addons-966657) Calling .GetMachineName
	I0819 18:37:02.322751  438797 main.go:141] libmachine: (addons-966657) Calling .GetIP
	I0819 18:37:02.325480  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.325837  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.325866  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.326032  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:02.328593  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.329006  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.329036  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.329193  438797 provision.go:143] copyHostCerts
	I0819 18:37:02.329278  438797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 18:37:02.329438  438797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 18:37:02.329537  438797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 18:37:02.329607  438797 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.addons-966657 san=[127.0.0.1 192.168.39.241 addons-966657 localhost minikube]
	I0819 18:37:02.419609  438797 provision.go:177] copyRemoteCerts
	I0819 18:37:02.419676  438797 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:37:02.419704  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:02.422454  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.422795  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.422823  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.422996  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:02.423208  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.423420  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:02.423629  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:02.507304  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:37:02.531928  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 18:37:02.555925  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:37:02.587928  438797 provision.go:87] duration metric: took 265.498055ms to configureAuth
	I0819 18:37:02.587964  438797 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:37:02.588137  438797 config.go:182] Loaded profile config "addons-966657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:37:02.588228  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:02.590947  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.591272  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.591304  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.591502  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:02.591751  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.591967  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.592101  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:02.592274  438797 main.go:141] libmachine: Using SSH client type: native
	I0819 18:37:02.592440  438797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0819 18:37:02.592455  438797 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:37:02.857116  438797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:37:02.857172  438797 main.go:141] libmachine: Checking connection to Docker...
	I0819 18:37:02.857182  438797 main.go:141] libmachine: (addons-966657) Calling .GetURL
	I0819 18:37:02.858468  438797 main.go:141] libmachine: (addons-966657) DBG | Using libvirt version 6000000
	I0819 18:37:02.860751  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.861066  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.861089  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.861276  438797 main.go:141] libmachine: Docker is up and running!
	I0819 18:37:02.861292  438797 main.go:141] libmachine: Reticulating splines...
	I0819 18:37:02.861302  438797 client.go:171] duration metric: took 22.112640246s to LocalClient.Create
	I0819 18:37:02.861335  438797 start.go:167] duration metric: took 22.112712107s to libmachine.API.Create "addons-966657"
	I0819 18:37:02.861358  438797 start.go:293] postStartSetup for "addons-966657" (driver="kvm2")
	I0819 18:37:02.861375  438797 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:37:02.861397  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:02.861651  438797 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:37:02.861676  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:02.863904  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.864206  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.864229  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.864429  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:02.864618  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.864794  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:02.864923  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:02.951514  438797 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:37:02.955961  438797 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:37:02.955999  438797 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 18:37:02.956090  438797 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 18:37:02.956119  438797 start.go:296] duration metric: took 94.752426ms for postStartSetup
	I0819 18:37:02.956160  438797 main.go:141] libmachine: (addons-966657) Calling .GetConfigRaw
	I0819 18:37:02.956773  438797 main.go:141] libmachine: (addons-966657) Calling .GetIP
	I0819 18:37:02.959255  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.959593  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.959629  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.959883  438797 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/config.json ...
	I0819 18:37:02.960089  438797 start.go:128] duration metric: took 22.229939651s to createHost
	I0819 18:37:02.960114  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:02.962473  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.962819  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.962854  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.963088  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:02.963307  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.963488  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.963612  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:02.963809  438797 main.go:141] libmachine: Using SSH client type: native
	I0819 18:37:02.963999  438797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0819 18:37:02.964013  438797 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:37:03.074030  438797 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724092623.049141146
	
	I0819 18:37:03.074085  438797 fix.go:216] guest clock: 1724092623.049141146
	I0819 18:37:03.074094  438797 fix.go:229] Guest: 2024-08-19 18:37:03.049141146 +0000 UTC Remote: 2024-08-19 18:37:02.960101488 +0000 UTC m=+22.334685821 (delta=89.039658ms)
	I0819 18:37:03.074117  438797 fix.go:200] guest clock delta is within tolerance: 89.039658ms
	I0819 18:37:03.074122  438797 start.go:83] releasing machines lock for "addons-966657", held for 22.344053258s
	I0819 18:37:03.074144  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:03.074436  438797 main.go:141] libmachine: (addons-966657) Calling .GetIP
	I0819 18:37:03.077173  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:03.077527  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:03.077556  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:03.077725  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:03.078331  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:03.078485  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:03.078596  438797 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:37:03.078657  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:03.078661  438797 ssh_runner.go:195] Run: cat /version.json
	I0819 18:37:03.078679  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:03.081128  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:03.081184  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:03.081446  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:03.081473  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:03.081602  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:03.081629  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:03.081861  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:03.081865  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:03.082075  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:03.082103  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:03.082227  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:03.082233  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:03.082416  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:03.082424  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:03.183829  438797 ssh_runner.go:195] Run: systemctl --version
	I0819 18:37:03.189838  438797 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:37:03.345484  438797 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:37:03.351671  438797 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:37:03.351750  438797 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:37:03.367483  438797 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:37:03.367521  438797 start.go:495] detecting cgroup driver to use...
	I0819 18:37:03.367603  438797 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:37:03.384104  438797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:37:03.398255  438797 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:37:03.398338  438797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:37:03.411990  438797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:37:03.426022  438797 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:37:03.541519  438797 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:37:03.685317  438797 docker.go:233] disabling docker service ...
	I0819 18:37:03.685404  438797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:37:03.699532  438797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:37:03.712621  438797 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:37:03.847936  438797 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:37:03.956128  438797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:37:03.970563  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:37:03.988777  438797 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:37:03.988837  438797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:03.999707  438797 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:37:03.999784  438797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:04.010938  438797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:04.023277  438797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:04.035514  438797 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:37:04.047925  438797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:04.059090  438797 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:04.076362  438797 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:04.087873  438797 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:37:04.099496  438797 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:37:04.099574  438797 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:37:04.112666  438797 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:37:04.122993  438797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:37:04.232571  438797 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:37:04.369907  438797 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:37:04.370008  438797 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:37:04.374615  438797 start.go:563] Will wait 60s for crictl version
	I0819 18:37:04.374686  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:37:04.378628  438797 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:37:04.414542  438797 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:37:04.414637  438797 ssh_runner.go:195] Run: crio --version
	I0819 18:37:04.442323  438797 ssh_runner.go:195] Run: crio --version
	I0819 18:37:04.472624  438797 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:37:04.474056  438797 main.go:141] libmachine: (addons-966657) Calling .GetIP
	I0819 18:37:04.476724  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:04.477038  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:04.477069  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:04.477329  438797 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:37:04.481395  438797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:37:04.493811  438797 kubeadm.go:883] updating cluster {Name:addons-966657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-966657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:37:04.493933  438797 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:37:04.493979  438797 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:37:04.525512  438797 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 18:37:04.525592  438797 ssh_runner.go:195] Run: which lz4
	I0819 18:37:04.529306  438797 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 18:37:04.533344  438797 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 18:37:04.533379  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 18:37:05.715255  438797 crio.go:462] duration metric: took 1.185978933s to copy over tarball
	I0819 18:37:05.715346  438797 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 18:37:07.932024  438797 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.216645114s)
	I0819 18:37:07.932053  438797 crio.go:469] duration metric: took 2.216765379s to extract the tarball
	I0819 18:37:07.932061  438797 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 18:37:07.968177  438797 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:37:08.015293  438797 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:37:08.015328  438797 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:37:08.015337  438797 kubeadm.go:934] updating node { 192.168.39.241 8443 v1.31.0 crio true true} ...
	I0819 18:37:08.015484  438797 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-966657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-966657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:37:08.015556  438797 ssh_runner.go:195] Run: crio config
	I0819 18:37:08.066758  438797 cni.go:84] Creating CNI manager for ""
	I0819 18:37:08.066781  438797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:37:08.066792  438797 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:37:08.066842  438797 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.241 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-966657 NodeName:addons-966657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:37:08.066978  438797 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-966657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:37:08.067045  438797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:37:08.077256  438797 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:37:08.077332  438797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:37:08.087129  438797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 18:37:08.105384  438797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:37:08.122473  438797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0819 18:37:08.140514  438797 ssh_runner.go:195] Run: grep 192.168.39.241	control-plane.minikube.internal$ /etc/hosts
	I0819 18:37:08.144383  438797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.241	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:37:08.157348  438797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:37:08.280174  438797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:37:08.298854  438797 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657 for IP: 192.168.39.241
	I0819 18:37:08.298883  438797 certs.go:194] generating shared ca certs ...
	I0819 18:37:08.298901  438797 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.299069  438797 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 18:37:08.375620  438797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt ...
	I0819 18:37:08.375653  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt: {Name:mk16115d9abdf6effc0b1430804b3178a06d38df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.375862  438797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key ...
	I0819 18:37:08.375878  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key: {Name:mk48d912f99f1dc36b0b0fc6644cc62336d64ef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.375975  438797 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 18:37:08.479339  438797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt ...
	I0819 18:37:08.479373  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt: {Name:mk61277915c60e1ebd7acefaf83d0042478e62e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.479559  438797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key ...
	I0819 18:37:08.479577  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key: {Name:mk60e93c8cfd9bbe3e8238ba39bd3a556bacda04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.479673  438797 certs.go:256] generating profile certs ...
	I0819 18:37:08.479753  438797 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.key
	I0819 18:37:08.479778  438797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt with IP's: []
	I0819 18:37:08.642672  438797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt ...
	I0819 18:37:08.642710  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: {Name:mk49593965b499436279bde5737bb16c84d1bef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.642901  438797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.key ...
	I0819 18:37:08.642918  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.key: {Name:mk70a6df7d74775bcc1baec44b78c3b9c382e131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.643011  438797 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.key.7a5737c2
	I0819 18:37:08.643040  438797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.crt.7a5737c2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.241]
	I0819 18:37:08.730968  438797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.crt.7a5737c2 ...
	I0819 18:37:08.731006  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.crt.7a5737c2: {Name:mkcfc0f21e6a1bccadf908c525fafb3fe69fe05e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.731187  438797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.key.7a5737c2 ...
	I0819 18:37:08.731207  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.key.7a5737c2: {Name:mkb0b9ad344eb1dd46d88fbcf8123d3bc6e9982e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.731308  438797 certs.go:381] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.crt.7a5737c2 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.crt
	I0819 18:37:08.731405  438797 certs.go:385] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.key.7a5737c2 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.key
	I0819 18:37:08.731468  438797 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.key
	I0819 18:37:08.731501  438797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.crt with IP's: []
	I0819 18:37:09.118243  438797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.crt ...
	I0819 18:37:09.118285  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.crt: {Name:mk99fe35ee7c9c1c7e68245e075497747f40bb1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:09.118464  438797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.key ...
	I0819 18:37:09.118477  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.key: {Name:mk53d2151bda86b6731068605674c5a506741333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:09.118653  438797 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 18:37:09.118691  438797 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:37:09.118716  438797 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:37:09.118741  438797 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 18:37:09.119404  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:37:09.144039  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:37:09.167932  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:37:09.192440  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 18:37:09.216892  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 18:37:09.241222  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:37:09.266058  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:37:09.290620  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:37:09.317796  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:37:09.343870  438797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:37:09.362924  438797 ssh_runner.go:195] Run: openssl version
	I0819 18:37:09.368884  438797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:37:09.381403  438797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:09.386113  438797 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:09.386185  438797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:09.392258  438797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:37:09.404793  438797 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:37:09.409163  438797 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 18:37:09.409232  438797 kubeadm.go:392] StartCluster: {Name:addons-966657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-966657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:37:09.409339  438797 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:37:09.409440  438797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:37:09.448288  438797 cri.go:89] found id: ""
	I0819 18:37:09.448376  438797 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 18:37:09.458440  438797 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:37:09.468361  438797 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:37:09.480946  438797 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:37:09.480971  438797 kubeadm.go:157] found existing configuration files:
	
	I0819 18:37:09.481036  438797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:37:09.490669  438797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:37:09.490756  438797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:37:09.500169  438797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:37:09.509856  438797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:37:09.509936  438797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:37:09.519386  438797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:37:09.528628  438797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:37:09.528699  438797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:37:09.539265  438797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:37:09.548823  438797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:37:09.548907  438797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:37:09.558609  438797 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:37:09.603506  438797 kubeadm.go:310] W0819 18:37:09.585815     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:37:09.604362  438797 kubeadm.go:310] W0819 18:37:09.587028     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:37:09.720594  438797 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:37:19.299154  438797 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:37:19.299239  438797 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:37:19.299345  438797 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:37:19.299490  438797 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:37:19.299647  438797 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:37:19.299748  438797 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:37:19.301247  438797 out.go:235]   - Generating certificates and keys ...
	I0819 18:37:19.301348  438797 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:37:19.301414  438797 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:37:19.301509  438797 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 18:37:19.301586  438797 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 18:37:19.301675  438797 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 18:37:19.301753  438797 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 18:37:19.301816  438797 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 18:37:19.301917  438797 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-966657 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
	I0819 18:37:19.301962  438797 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 18:37:19.302072  438797 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-966657 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
	I0819 18:37:19.302134  438797 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 18:37:19.302188  438797 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 18:37:19.302233  438797 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 18:37:19.302280  438797 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:37:19.302323  438797 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:37:19.302372  438797 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:37:19.302417  438797 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:37:19.302472  438797 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:37:19.302522  438797 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:37:19.302597  438797 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:37:19.302657  438797 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:37:19.304027  438797 out.go:235]   - Booting up control plane ...
	I0819 18:37:19.304143  438797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:37:19.304231  438797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:37:19.304339  438797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:37:19.304471  438797 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:37:19.304575  438797 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:37:19.304638  438797 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:37:19.304814  438797 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:37:19.304957  438797 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:37:19.305038  438797 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.860845ms
	I0819 18:37:19.305120  438797 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:37:19.305199  438797 kubeadm.go:310] [api-check] The API server is healthy after 5.001924893s
	I0819 18:37:19.305290  438797 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:37:19.305413  438797 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:37:19.305513  438797 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:37:19.305713  438797 kubeadm.go:310] [mark-control-plane] Marking the node addons-966657 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:37:19.305770  438797 kubeadm.go:310] [bootstrap-token] Using token: nmfv8j.mc6x4vdc2focxr3m
	I0819 18:37:19.308225  438797 out.go:235]   - Configuring RBAC rules ...
	I0819 18:37:19.308357  438797 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:37:19.308431  438797 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:37:19.308558  438797 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:37:19.308665  438797 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:37:19.308767  438797 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:37:19.308838  438797 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:37:19.308955  438797 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:37:19.309028  438797 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:37:19.309101  438797 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:37:19.309112  438797 kubeadm.go:310] 
	I0819 18:37:19.309206  438797 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:37:19.309217  438797 kubeadm.go:310] 
	I0819 18:37:19.309328  438797 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:37:19.309337  438797 kubeadm.go:310] 
	I0819 18:37:19.309367  438797 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:37:19.309425  438797 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:37:19.309477  438797 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:37:19.309484  438797 kubeadm.go:310] 
	I0819 18:37:19.309529  438797 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:37:19.309538  438797 kubeadm.go:310] 
	I0819 18:37:19.309584  438797 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:37:19.309591  438797 kubeadm.go:310] 
	I0819 18:37:19.309646  438797 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:37:19.309752  438797 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:37:19.309838  438797 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:37:19.309851  438797 kubeadm.go:310] 
	I0819 18:37:19.309955  438797 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:37:19.310042  438797 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:37:19.310051  438797 kubeadm.go:310] 
	I0819 18:37:19.310127  438797 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nmfv8j.mc6x4vdc2focxr3m \
	I0819 18:37:19.310221  438797 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f \
	I0819 18:37:19.310243  438797 kubeadm.go:310] 	--control-plane 
	I0819 18:37:19.310249  438797 kubeadm.go:310] 
	I0819 18:37:19.310318  438797 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:37:19.310326  438797 kubeadm.go:310] 
	I0819 18:37:19.310422  438797 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nmfv8j.mc6x4vdc2focxr3m \
	I0819 18:37:19.310563  438797 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f 
	I0819 18:37:19.310583  438797 cni.go:84] Creating CNI manager for ""
	I0819 18:37:19.310596  438797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:37:19.312262  438797 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:37:19.313572  438797 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:37:19.323960  438797 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
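	The scp above writes the bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist. As a rough illustration of what a bridge conflist of this kind typically contains (the exact 496-byte file minikube generates may differ in fields and values), a hypothetical sketch:
	
	# Hypothetical bridge CNI conflist for illustration; not the exact file written above.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF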
	I0819 18:37:19.342499  438797 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:37:19.342591  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:19.342614  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-966657 minikube.k8s.io/updated_at=2024_08_19T18_37_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=addons-966657 minikube.k8s.io/primary=true
	I0819 18:37:19.387374  438797 ops.go:34] apiserver oom_adj: -16
	I0819 18:37:19.512794  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:20.013865  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:20.513875  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:21.013402  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:21.513013  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:22.013750  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:22.513584  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:23.012982  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:23.513614  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:24.013470  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:24.124985  438797 kubeadm.go:1113] duration metric: took 4.782464375s to wait for elevateKubeSystemPrivileges
	I0819 18:37:24.125041  438797 kubeadm.go:394] duration metric: took 14.715818031s to StartCluster
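	The commands above grant cluster-admin to the kube-system default service account, label the new node, and then poll for the default service account roughly every 500ms until it exists (the elevateKubeSystemPrivileges wait). A rough shell equivalent, reconstructed only from the logged commands and with the node labels abbreviated:
	
	# Approximation of the logged post-init steps; not minikube's actual code path.
	KUBECTL=/var/lib/minikube/binaries/v1.31.0/kubectl
	CONF=/var/lib/minikube/kubeconfig
	sudo "$KUBECTL" --kubeconfig="$CONF" create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default
	sudo "$KUBECTL" --kubeconfig="$CONF" label --overwrite nodes addons-966657 \
	  minikube.k8s.io/primary=true   # the log applies several more minikube.k8s.io/* labels
	until sudo "$KUBECTL" --kubeconfig="$CONF" get sa default >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms retry interval visible in the timestamps above
	done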
	I0819 18:37:24.125065  438797 settings.go:142] acquiring lock: {Name:mkc71def6be9966e5f008a22161bf3ed26f482b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:24.125242  438797 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 18:37:24.125675  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:24.125914  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 18:37:24.125945  438797 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:37:24.126031  438797 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 18:37:24.126166  438797 config.go:182] Loaded profile config "addons-966657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:37:24.126169  438797 addons.go:69] Setting default-storageclass=true in profile "addons-966657"
	I0819 18:37:24.126185  438797 addons.go:69] Setting helm-tiller=true in profile "addons-966657"
	I0819 18:37:24.126202  438797 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-966657"
	I0819 18:37:24.126213  438797 addons.go:234] Setting addon helm-tiller=true in "addons-966657"
	I0819 18:37:24.126173  438797 addons.go:69] Setting yakd=true in profile "addons-966657"
	I0819 18:37:24.126221  438797 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-966657"
	I0819 18:37:24.126228  438797 addons.go:69] Setting cloud-spanner=true in profile "addons-966657"
	I0819 18:37:24.126239  438797 addons.go:234] Setting addon yakd=true in "addons-966657"
	I0819 18:37:24.126245  438797 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-966657"
	I0819 18:37:24.126252  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.126262  438797 addons.go:234] Setting addon cloud-spanner=true in "addons-966657"
	I0819 18:37:24.126270  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.126279  438797 addons.go:69] Setting ingress=true in profile "addons-966657"
	I0819 18:37:24.126295  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.126304  438797 addons.go:69] Setting ingress-dns=true in profile "addons-966657"
	I0819 18:37:24.126321  438797 addons.go:234] Setting addon ingress-dns=true in "addons-966657"
	I0819 18:37:24.126348  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.126180  438797 addons.go:69] Setting gcp-auth=true in profile "addons-966657"
	I0819 18:37:24.126405  438797 mustload.go:65] Loading cluster: addons-966657
	I0819 18:37:24.126565  438797 config.go:182] Loaded profile config "addons-966657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:37:24.126700  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.126731  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.126746  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.126760  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.126768  438797 addons.go:69] Setting inspektor-gadget=true in profile "addons-966657"
	I0819 18:37:24.126779  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.126789  438797 addons.go:234] Setting addon inspektor-gadget=true in "addons-966657"
	I0819 18:37:24.126793  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.126810  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.126274  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.126919  438797 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-966657"
	I0819 18:37:24.126964  438797 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-966657"
	I0819 18:37:24.126761  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.126970  438797 addons.go:69] Setting metrics-server=true in profile "addons-966657"
	I0819 18:37:24.126999  438797 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-966657"
	I0819 18:37:24.127013  438797 addons.go:234] Setting addon metrics-server=true in "addons-966657"
	I0819 18:37:24.127022  438797 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-966657"
	I0819 18:37:24.126991  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.127154  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.127180  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.127225  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.127242  438797 addons.go:69] Setting volumesnapshots=true in profile "addons-966657"
	I0819 18:37:24.127249  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.127262  438797 addons.go:234] Setting addon volumesnapshots=true in "addons-966657"
	I0819 18:37:24.127269  438797 addons.go:69] Setting volcano=true in profile "addons-966657"
	I0819 18:37:24.127284  438797 addons.go:234] Setting addon volcano=true in "addons-966657"
	I0819 18:37:24.126748  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.127294  438797 addons.go:69] Setting registry=true in profile "addons-966657"
	I0819 18:37:24.127295  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.127312  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.127315  438797 addons.go:234] Setting addon registry=true in "addons-966657"
	I0819 18:37:24.126911  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.127337  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.127364  438797 addons.go:69] Setting storage-provisioner=true in profile "addons-966657"
	I0819 18:37:24.127382  438797 addons.go:234] Setting addon storage-provisioner=true in "addons-966657"
	I0819 18:37:24.126299  438797 addons.go:234] Setting addon ingress=true in "addons-966657"
	I0819 18:37:24.127413  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.127426  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.127442  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.127472  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.127482  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.127543  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.127568  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.127854  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.127869  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.128178  438797 out.go:177] * Verifying Kubernetes components...
	I0819 18:37:24.128203  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.128259  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.128283  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.128217  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.128398  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.128740  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.128794  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.133630  438797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:37:24.148700  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36425
	I0819 18:37:24.149071  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0819 18:37:24.149740  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.150323  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.149879  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.150091  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.150405  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.150824  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.151061  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.151085  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.151600  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.151631  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.151694  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.152323  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.152373  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.152848  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.153443  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.153498  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.157677  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.160013  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.160473  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.160511  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.167736  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38861
	I0819 18:37:24.168592  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.169260  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.169318  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.169720  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.170304  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.170379  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.175789  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41173
	I0819 18:37:24.176135  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39677
	I0819 18:37:24.176576  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.176728  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.177250  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.177281  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.177391  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.177470  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.177739  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.177883  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.177979  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.178495  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.178544  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.180035  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
	I0819 18:37:24.182281  438797 addons.go:234] Setting addon default-storageclass=true in "addons-966657"
	I0819 18:37:24.182347  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.182738  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.182778  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.183443  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.183554  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0819 18:37:24.184009  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34319
	I0819 18:37:24.184371  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.184854  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.184876  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.185042  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.185067  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.185390  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.185450  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.186016  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.186070  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.186718  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.186771  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.187185  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0819 18:37:24.189553  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.190222  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.190246  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.190691  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.191310  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.191359  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.193629  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.194326  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.194353  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.194774  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.195040  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.196118  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0819 18:37:24.196690  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.197307  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.197325  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.198167  438797 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-966657"
	I0819 18:37:24.198216  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.198601  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.198656  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.199716  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38403
	I0819 18:37:24.199886  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.200181  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.200511  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.200571  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.201908  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.201929  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.202537  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.203183  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.203235  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.203638  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42631
	I0819 18:37:24.204247  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.204932  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.204957  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.205512  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.206083  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.206113  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.215728  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0819 18:37:24.216039  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34173
	I0819 18:37:24.218402  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.218558  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0819 18:37:24.219230  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.219258  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.219635  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.220227  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.220276  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.220989  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.221635  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.221665  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.222070  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.222305  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.224145  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41749
	I0819 18:37:24.224165  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.224835  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.224864  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.225283  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.225509  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.226779  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I0819 18:37:24.227408  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36527
	I0819 18:37:24.227993  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.228123  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.229162  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.229183  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.229625  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.229688  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.229951  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.230265  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 18:37:24.230306  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.230331  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.231305  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.231544  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.232112  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.232704  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0819 18:37:24.233256  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.233503  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.233666  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.233828  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.233845  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.234121  438797 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 18:37:24.234624  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.235187  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 18:37:24.235327  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.235366  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.235652  438797 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 18:37:24.235678  438797 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 18:37:24.235701  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.236330  438797 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 18:37:24.237178  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.237203  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.237347  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 18:37:24.237984  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.238334  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.238487  438797 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 18:37:24.238510  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 18:37:24.238531  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.239316  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35741
	I0819 18:37:24.239654  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.239671  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 18:37:24.239984  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.240086  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39723
	I0819 18:37:24.240304  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.240328  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.241810  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 18:37:24.242998  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 18:37:24.243373  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.243406  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.243431  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.243478  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.243495  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.243523  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.243803  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.244193  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.244199  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.244210  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.244217  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.244266  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.244436  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.244514  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.244776  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.244780  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
	I0819 18:37:24.244826  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.244870  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 18:37:24.244973  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.245079  438797 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 18:37:24.245697  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.245743  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.245757  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.246092  438797 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 18:37:24.246113  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 18:37:24.246133  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.246452  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.246888  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39809
	I0819 18:37:24.246941  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46275
	I0819 18:37:24.247013  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 18:37:24.247334  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.248133  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.248196  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:24.248222  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:24.248631  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.248637  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:24.248669  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:24.248678  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:24.248686  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:24.248694  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:24.249230  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.249254  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.249759  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:24.249775  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 18:37:24.249852  438797 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 18:37:24.249988  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I0819 18:37:24.250296  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.250377  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.250632  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.250710  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.251210  438797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 18:37:24.251420  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.251234  438797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 18:37:24.251463  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.251796  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36899
	I0819 18:37:24.252262  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.252281  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.252653  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.252671  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.252746  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.252781  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.253457  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.253490  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.254196  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.254376  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.254392  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.254448  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.254774  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.254973  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.255098  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.255978  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.255999  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.256131  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.256331  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.256506  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.256671  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.257072  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.257517  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.259107  438797 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 18:37:24.259176  438797 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 18:37:24.259255  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.259296  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.259405  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.259414  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.259588  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.259769  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.259918  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.260537  438797 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 18:37:24.260557  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 18:37:24.260575  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.261191  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.261497  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.261517  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.261736  438797 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 18:37:24.262076  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0819 18:37:24.262545  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.262854  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.263068  438797 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:37:24.263242  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.263261  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.263320  438797 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 18:37:24.263334  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 18:37:24.263358  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.263678  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.264050  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.264151  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.264189  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.264501  438797 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:37:24.264521  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:37:24.264541  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.264742  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.264771  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.265042  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.265064  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.265245  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.265510  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.265727  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.265790  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0819 18:37:24.266129  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.268271  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.268619  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.268729  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.268748  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.268907  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.269163  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.269340  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.269402  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.269577  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.270347  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.270378  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.270621  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.270880  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.271099  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.271305  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.271328  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.271378  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.271687  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.271877  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.273715  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.275532  438797 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 18:37:24.276715  438797 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 18:37:24.276735  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 18:37:24.276764  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.278274  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44933
	I0819 18:37:24.278721  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.279379  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.279399  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.279924  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36717
	I0819 18:37:24.280346  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.280618  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.280828  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.281119  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.281520  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.281539  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.281850  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.281870  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.281918  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.282121  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.282194  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.282831  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.283028  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.283209  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.284158  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.284225  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.286285  438797 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 18:37:24.286285  438797 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 18:37:24.287894  438797 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 18:37:24.287916  438797 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 18:37:24.287947  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.288226  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46539
	I0819 18:37:24.288633  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.289264  438797 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 18:37:24.289404  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40599
	I0819 18:37:24.289756  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.289773  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.289858  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.290119  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.290358  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.290378  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.290394  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.291718  438797 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 18:37:24.292001  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.292151  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.292321  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I0819 18:37:24.292522  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.292542  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.292890  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.292966  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.293004  438797 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 18:37:24.293020  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 18:37:24.293039  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.293048  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.293083  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I0819 18:37:24.293243  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.293331  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.293550  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.293568  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.293731  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.293883  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.293927  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.293997  438797 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 18:37:24.294123  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.294301  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.294539  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.294559  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.294954  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.295131  438797 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 18:37:24.295149  438797 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 18:37:24.295170  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.295207  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.296659  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.297918  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.298007  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.298283  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 18:37:24.298546  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.298757  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.298821  438797 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:37:24.298841  438797 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:37:24.298808  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.298861  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.298884  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.298907  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.299087  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.299243  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.299399  438797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 18:37:24.299415  438797 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 18:37:24.299433  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.299710  438797 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 18:37:24.299764  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.300325  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.300347  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.300554  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.300803  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.300947  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.301091  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.302174  438797 out.go:177]   - Using image docker.io/busybox:stable
	I0819 18:37:24.303034  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.303328  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.303429  438797 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 18:37:24.303444  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 18:37:24.303459  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.303483  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.303742  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.303769  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.303770  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.303935  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.303951  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.304097  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.304098  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.304181  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.304224  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.304270  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.304505  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.306846  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.307340  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.307368  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.307552  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.307772  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.307950  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.308080  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	W0819 18:37:24.317974  438797 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54264->192.168.39.241:22: read: connection reset by peer
	I0819 18:37:24.318014  438797 retry.go:31] will retry after 310.662789ms: ssh: handshake failed: read tcp 192.168.39.1:54264->192.168.39.241:22: read: connection reset by peer
	W0819 18:37:24.332819  438797 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54276->192.168.39.241:22: read: connection reset by peer
	I0819 18:37:24.332854  438797 retry.go:31] will retry after 321.912268ms: ssh: handshake failed: read tcp 192.168.39.1:54276->192.168.39.241:22: read: connection reset by peer
	W0819 18:37:24.332906  438797 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54292->192.168.39.241:22: read: connection reset by peer
	I0819 18:37:24.332912  438797 retry.go:31] will retry after 270.762609ms: ssh: handshake failed: read tcp 192.168.39.1:54292->192.168.39.241:22: read: connection reset by peer
	I0819 18:37:24.591822  438797 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 18:37:24.591845  438797 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 18:37:24.615928  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 18:37:24.643216  438797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:37:24.643261  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 18:37:24.652036  438797 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 18:37:24.652068  438797 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 18:37:24.708621  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 18:37:24.727172  438797 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 18:37:24.727205  438797 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 18:37:24.751768  438797 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 18:37:24.751790  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 18:37:24.755690  438797 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 18:37:24.755715  438797 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 18:37:24.762880  438797 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 18:37:24.762912  438797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 18:37:24.764431  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 18:37:24.794316  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:37:24.798802  438797 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 18:37:24.798866  438797 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 18:37:24.844649  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 18:37:24.931835  438797 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 18:37:24.931870  438797 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 18:37:24.966646  438797 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 18:37:24.966671  438797 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 18:37:24.973163  438797 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 18:37:24.973193  438797 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 18:37:25.008176  438797 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 18:37:25.008208  438797 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 18:37:25.016675  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 18:37:25.018401  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 18:37:25.058294  438797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 18:37:25.058329  438797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 18:37:25.085070  438797 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 18:37:25.085106  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 18:37:25.095209  438797 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 18:37:25.095249  438797 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 18:37:25.111211  438797 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 18:37:25.111240  438797 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 18:37:25.192220  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:37:25.221680  438797 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 18:37:25.221719  438797 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 18:37:25.276169  438797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 18:37:25.276204  438797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 18:37:25.312028  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 18:37:25.319986  438797 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 18:37:25.320012  438797 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 18:37:25.322144  438797 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:37:25.322167  438797 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 18:37:25.333279  438797 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 18:37:25.333312  438797 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 18:37:25.483750  438797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 18:37:25.483787  438797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 18:37:25.506127  438797 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 18:37:25.506161  438797 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 18:37:25.539737  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:37:25.544159  438797 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 18:37:25.544182  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 18:37:25.561770  438797 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 18:37:25.561806  438797 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 18:37:25.700672  438797 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 18:37:25.700698  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 18:37:25.766362  438797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 18:37:25.766397  438797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 18:37:25.951045  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 18:37:25.974915  438797 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 18:37:25.974944  438797 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 18:37:26.023294  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 18:37:26.125523  438797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 18:37:26.125564  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 18:37:26.279736  438797 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 18:37:26.279776  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 18:37:26.465274  438797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 18:37:26.465307  438797 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 18:37:26.548678  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 18:37:26.837769  438797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 18:37:26.837796  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 18:37:27.114269  438797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 18:37:27.114293  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 18:37:27.464793  438797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 18:37:27.464828  438797 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 18:37:27.675567  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 18:37:31.300777  438797 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 18:37:31.300829  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:31.304563  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:31.305063  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:31.305090  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:31.305315  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:31.305606  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:31.305807  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:31.306013  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:31.719945  438797 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 18:37:31.816761  438797 addons.go:234] Setting addon gcp-auth=true in "addons-966657"
	I0819 18:37:31.816824  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:31.817246  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:31.817297  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:31.833919  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45321
	I0819 18:37:31.834421  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:31.834916  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:31.834941  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:31.835314  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:31.835906  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:31.835933  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:31.852528  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39713
	I0819 18:37:31.852962  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:31.853617  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:31.853651  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:31.854121  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:31.854348  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:31.856179  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:31.856488  438797 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 18:37:31.856529  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:31.860126  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:31.860598  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:31.860627  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:31.860826  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:31.861001  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:31.861185  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:31.861380  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:32.647425  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.031457911s)
	I0819 18:37:32.647464  438797 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.004209553s)
	I0819 18:37:32.647493  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.647508  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.647506  438797 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.004211829s)
	I0819 18:37:32.647531  438797 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0819 18:37:32.647615  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.938957182s)
	I0819 18:37:32.647662  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.647676  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.647722  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.883239155s)
	I0819 18:37:32.647775  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.647789  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.647789  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.853434397s)
	I0819 18:37:32.647822  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.647832  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.647881  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.803209243s)
	I0819 18:37:32.647898  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.647907  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.647980  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.631266022s)
	I0819 18:37:32.647997  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.648006  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.648097  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.629660213s)
	I0819 18:37:32.648130  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.648144  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.648539  438797 node_ready.go:35] waiting up to 6m0s for node "addons-966657" to be "Ready" ...
	I0819 18:37:32.648686  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.456439405s)
	I0819 18:37:32.648708  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.648718  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.648751  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.336688692s)
	I0819 18:37:32.648777  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.648787  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.648828  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.109062582s)
	I0819 18:37:32.648843  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.648853  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.648869  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.697794495s)
	I0819 18:37:32.648888  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.648897  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.648985  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.625656079s)
	W0819 18:37:32.649010  438797 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 18:37:32.649029  438797 retry.go:31] will retry after 127.636736ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 18:37:32.649118  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.100404995s)
	I0819 18:37:32.649155  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.649164  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.651641  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.651678  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.651693  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.651702  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.651711  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.651718  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.651719  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.651727  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.651742  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.651750  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.651808  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.651830  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.651837  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.651845  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.651852  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.651890  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.651911  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.651919  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.651925  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.651933  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.651971  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.651990  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.651997  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.652005  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.652012  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.652049  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.652069  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.652076  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.652084  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.652090  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.652128  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.652149  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.652155  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.652163  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.652170  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.652207  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.652222  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.652244  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.652250  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.652257  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.652264  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.652307  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.652316  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.652325  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.652331  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.652366  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.652391  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.652397  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.652405  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.652411  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.652449  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.652469  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.652476  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.652483  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.652489  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.653375  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.653441  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.653449  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.653461  438797 addons.go:475] Verifying addon ingress=true in "addons-966657"
	I0819 18:37:32.653586  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.653637  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.653645  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.653874  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.653905  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.653932  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.653939  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.654203  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.654234  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.654242  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.654251  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.654262  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.654328  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.654341  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.654413  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.654427  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.655010  438797 out.go:177] * Verifying ingress addon...
	I0819 18:37:32.655798  438797 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-966657 service yakd-dashboard -n yakd-dashboard
	
	I0819 18:37:32.656199  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.656243  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.656245  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.656252  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.656269  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.656305  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.656312  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.656336  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.656374  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.656383  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.656474  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.656500  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.656508  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.656518  438797 addons.go:475] Verifying addon metrics-server=true in "addons-966657"
	I0819 18:37:32.656592  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.656612  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.656620  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.656626  438797 addons.go:475] Verifying addon registry=true in "addons-966657"
	I0819 18:37:32.656653  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.656669  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.656705  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.656718  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.656730  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.657499  438797 out.go:177] * Verifying registry addon...
	I0819 18:37:32.657931  438797 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 18:37:32.659515  438797 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 18:37:32.702808  438797 node_ready.go:49] node "addons-966657" has status "Ready":"True"
	I0819 18:37:32.702850  438797 node_ready.go:38] duration metric: took 54.267496ms for node "addons-966657" to be "Ready" ...
	I0819 18:37:32.702863  438797 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:37:32.727199  438797 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 18:37:32.727244  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:32.727345  438797 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 18:37:32.727372  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:32.748174  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.748211  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.748542  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.748571  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 18:37:32.748680  438797 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0819 18:37:32.768455  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.768491  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.768831  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.768853  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.777738  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 18:37:32.815685  438797 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fzk2l" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:32.884383  438797 pod_ready.go:93] pod "coredns-6f6b679f8f-fzk2l" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:32.884420  438797 pod_ready.go:82] duration metric: took 68.684278ms for pod "coredns-6f6b679f8f-fzk2l" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:32.884435  438797 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-h897n" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:32.960681  438797 pod_ready.go:93] pod "coredns-6f6b679f8f-h897n" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:32.960714  438797 pod_ready.go:82] duration metric: took 76.26993ms for pod "coredns-6f6b679f8f-h897n" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:32.960727  438797 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.011264  438797 pod_ready.go:93] pod "etcd-addons-966657" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:33.011297  438797 pod_ready.go:82] duration metric: took 50.56125ms for pod "etcd-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.011311  438797 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.349154  438797 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-966657" context rescaled to 1 replicas
	I0819 18:37:33.350195  438797 pod_ready.go:93] pod "kube-apiserver-addons-966657" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:33.350216  438797 pod_ready.go:82] duration metric: took 338.897988ms for pod "kube-apiserver-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.350228  438797 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.351025  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:33.351090  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:33.370438  438797 pod_ready.go:93] pod "kube-controller-manager-addons-966657" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:33.370476  438797 pod_ready.go:82] duration metric: took 20.237055ms for pod "kube-controller-manager-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.370492  438797 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rthg8" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.460174  438797 pod_ready.go:93] pod "kube-proxy-rthg8" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:33.460200  438797 pod_ready.go:82] duration metric: took 89.69991ms for pod "kube-proxy-rthg8" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.460213  438797 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.670653  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:33.674131  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:33.853823  438797 pod_ready.go:93] pod "kube-scheduler-addons-966657" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:33.853858  438797 pod_ready.go:82] duration metric: took 393.635436ms for pod "kube-scheduler-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.853874  438797 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:34.198456  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:34.198833  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:34.303889  438797 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.447373147s)
	I0819 18:37:34.304003  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.628370318s)
	I0819 18:37:34.304133  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:34.304151  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:34.304505  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:34.304531  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:34.304532  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:34.304548  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:34.304565  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:34.304855  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:34.304917  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:34.304935  438797 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-966657"
	I0819 18:37:34.305715  438797 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 18:37:34.306783  438797 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 18:37:34.308321  438797 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 18:37:34.309191  438797 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 18:37:34.309349  438797 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 18:37:34.309369  438797 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 18:37:34.332068  438797 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 18:37:34.332093  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:34.376715  438797 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 18:37:34.376744  438797 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 18:37:34.436011  438797 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 18:37:34.436036  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 18:37:34.499351  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 18:37:34.665613  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:34.666189  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:34.762678  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.984877697s)
	I0819 18:37:34.762756  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:34.762781  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:34.763164  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:34.763183  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:34.763194  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:34.763202  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:34.763562  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:34.763585  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:34.763621  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:34.817118  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:35.163136  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:35.164685  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:35.315275  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:35.729078  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:35.729892  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:35.787571  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.288165213s)
	I0819 18:37:35.787643  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:35.787661  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:35.788003  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:35.788083  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:35.788105  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:35.788123  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:35.788136  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:35.788381  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:35.788400  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:35.788403  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:35.790242  438797 addons.go:475] Verifying addon gcp-auth=true in "addons-966657"
	I0819 18:37:35.791774  438797 out.go:177] * Verifying gcp-auth addon...
	I0819 18:37:35.794041  438797 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 18:37:35.809576  438797 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 18:37:35.809601  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:35.819293  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:35.860120  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:36.163732  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:36.164540  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:36.298257  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:36.314565  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:36.665035  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:36.665908  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:36.797929  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:36.814770  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:37.167633  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:37.167789  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:37.297071  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:37.314122  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:37.662986  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:37.665405  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:37.798603  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:37.813969  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:38.161872  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:38.163363  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:38.298221  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:38.314265  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:38.358942  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:38.663918  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:38.665044  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:38.798273  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:38.813939  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:39.270039  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:39.270627  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:39.397251  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:39.398638  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:39.664129  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:39.664235  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:39.798254  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:39.814057  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:40.164230  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:40.165583  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:40.297275  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:40.313944  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:40.359258  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:40.663131  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:40.663501  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:40.798029  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:40.816456  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:41.162903  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:41.163680  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:41.305984  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:41.314852  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:41.825287  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:41.825513  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:41.825698  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:41.825951  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:42.172471  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:42.173004  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:42.297724  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:42.314264  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:42.360985  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:42.663852  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:42.666030  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:42.799105  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:42.814381  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:43.163257  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:43.163675  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:43.298549  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:43.314736  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:43.663896  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:43.664114  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:43.798425  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:43.814247  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:44.163656  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:44.164099  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:44.297260  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:44.314993  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:44.663065  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:44.663179  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:44.798617  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:44.818938  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:44.864305  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:45.162866  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:45.163009  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:45.298123  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:45.314935  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:45.662503  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:45.663236  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:45.797739  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:45.813970  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:46.162290  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:46.163521  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:46.299304  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:46.315049  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:46.663710  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:46.663717  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:46.797314  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:46.814144  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:47.162550  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:47.163970  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:47.297598  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:47.314614  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:47.359196  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:47.662319  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:47.663975  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:47.798281  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:47.814121  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:48.163341  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:48.164071  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:48.298170  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:48.314047  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:48.663903  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:48.664682  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:48.797523  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:48.814504  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:49.162478  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:49.164332  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:49.297802  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:49.313878  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:49.361006  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:49.714733  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:49.716385  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:49.798770  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:49.815209  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:50.162877  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:50.163116  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:50.298145  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:50.314602  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:50.662692  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:50.663897  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:50.798372  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:50.814301  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:51.165407  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:51.165761  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:51.297712  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:51.313912  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:51.666497  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:51.666768  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:51.797268  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:51.814149  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:51.859418  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:52.163687  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:52.164502  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:52.298309  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:52.314329  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:52.662887  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:52.663824  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:52.797941  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:52.814458  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:53.161949  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:53.164000  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:53.298478  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:53.314766  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:53.662549  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:53.663814  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:53.798665  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:53.813983  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:53.861355  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:54.164987  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:54.167746  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:54.298362  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:54.314321  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:54.663523  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:54.663821  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:54.798203  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:54.814529  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:55.163069  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:55.163794  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:55.297908  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:55.313503  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:55.663209  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:55.664542  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:55.798958  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:55.814128  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:56.162513  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:56.163999  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:56.298072  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:56.314872  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:56.360635  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:56.663115  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:56.663743  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:56.797849  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:56.899916  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:57.163293  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:57.163744  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:57.298240  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:57.314605  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:57.662479  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:57.663930  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:57.798708  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:57.813458  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:58.164222  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:58.164255  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:58.298519  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:58.314166  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:58.664309  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:58.664538  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:58.797993  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:58.813672  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:58.860874  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:59.162996  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:59.167582  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:59.298333  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:59.314347  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:59.662280  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:59.663332  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:59.797355  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:59.814034  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:00.162558  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:00.163536  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:00.298993  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:00.315065  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:00.663157  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:00.663301  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:00.797996  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:00.814188  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:01.163391  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:01.166470  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:01.296983  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:01.313232  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:01.360375  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:01.663376  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:01.674870  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:01.797699  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:01.813493  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:02.162651  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:02.166415  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:02.297460  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:02.315711  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:02.667224  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:02.668048  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:02.797799  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:02.814857  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:03.165005  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:03.165362  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:03.297448  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:03.314471  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:03.661883  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:03.663768  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:03.797679  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:03.813157  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:03.860259  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:04.162377  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:04.163452  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:04.298428  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:04.315747  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:04.662620  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:04.663715  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:04.797260  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:04.814032  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:05.163376  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:05.163795  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:05.298258  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:05.315189  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:05.662336  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:05.663775  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:05.798329  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:05.814294  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:06.163343  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:06.164896  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:06.298251  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:06.315443  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:06.364021  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:06.661937  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:06.664194  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:06.798245  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:06.813707  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:07.162627  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:07.163567  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:07.297659  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:07.313168  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:07.662273  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:07.663410  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:07.798736  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:07.814406  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:08.163513  438797 kapi.go:107] duration metric: took 35.503993674s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 18:38:08.164892  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:08.297791  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:08.314795  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:08.661927  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:08.798039  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:08.814314  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:08.860432  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:09.162905  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:09.298922  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:09.321824  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:09.662897  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:09.798274  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:09.814643  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:10.162642  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:10.297010  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:10.314652  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:10.663185  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:10.799793  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:10.814340  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:11.164049  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:11.298077  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:11.320867  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:11.364171  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:11.662255  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:11.798177  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:11.813936  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:12.161918  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:12.297500  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:12.314333  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:12.663257  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:12.798636  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:12.814675  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:13.162439  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:13.298300  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:13.319085  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:13.375892  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:13.664001  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:13.797532  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:13.813931  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:14.162534  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:14.297924  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:14.313845  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:14.662603  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:14.798242  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:14.813948  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:15.163036  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:15.297607  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:15.315189  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:15.662602  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:15.797711  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:15.814101  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:15.861648  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:16.162598  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:16.301105  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:16.315399  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:16.661954  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:16.797985  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:16.814124  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:17.545065  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:17.545616  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:17.546218  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:17.663006  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:17.797746  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:17.813982  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:18.163295  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:18.298203  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:18.314744  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:18.361196  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:18.662073  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:18.797883  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:18.813780  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:19.162523  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:19.304812  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:19.322776  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:19.663504  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:19.798180  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:19.814360  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:20.429771  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:20.430164  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:20.430337  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:20.430567  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:20.663064  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:20.799672  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:20.814637  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:21.162131  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:21.298267  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:21.314822  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:21.663453  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:21.797428  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:21.814525  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:22.163242  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:22.299162  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:22.400766  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:22.663207  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:22.798151  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:22.814339  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:22.860224  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:23.162636  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:23.297748  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:23.313334  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:23.661980  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:23.801829  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:23.813025  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:24.163337  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:24.297653  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:24.315140  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:24.662078  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:24.797887  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:24.814006  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:24.860745  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:25.167690  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:25.298317  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:25.314138  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:25.661912  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:25.798049  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:25.813953  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:26.166795  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:26.298405  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:26.314377  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:26.663192  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:26.797475  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:26.813982  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:26.872573  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:27.164167  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:27.298947  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:27.313675  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:27.662622  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:27.801449  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:27.814618  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:28.162087  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:28.297092  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:28.314334  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:28.669076  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:28.797584  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:28.815883  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:29.165957  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:29.298302  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:29.314499  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:29.360483  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:29.665721  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:29.799112  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:29.818048  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:30.164152  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:30.302436  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:30.315603  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:30.665010  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:30.798888  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:30.815476  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:31.163415  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:31.298352  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:31.316642  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:31.363183  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:31.662054  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:31.798338  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:31.814166  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:32.163746  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:32.298767  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:32.318404  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:32.663513  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:32.797786  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:32.813655  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:33.163948  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:33.298605  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:33.315898  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:33.371082  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:33.665949  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:33.799610  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:33.814030  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:34.163136  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:34.297596  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:34.314688  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:34.662405  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:34.797458  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:34.813882  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:35.164106  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:35.297932  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:35.315303  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:35.663213  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:35.797184  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:35.814031  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:35.860922  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:36.162749  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:36.297294  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:36.314111  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:36.679256  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:36.798807  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:36.901294  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:37.162901  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:37.297957  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:37.313971  438797 kapi.go:107] duration metric: took 1m3.004776059s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 18:38:37.662209  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:37.798513  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:37.874687  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:38.427743  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:38.430044  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:38.663741  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:38.798986  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:39.162507  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:39.297424  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:39.662593  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:39.797993  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:40.162215  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:40.298584  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:40.359501  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:40.662305  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:40.966022  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:41.163112  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:41.297230  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:41.662467  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:41.798476  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:42.163240  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:42.298064  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:42.361146  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:42.661939  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:42.798683  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:43.162456  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:43.299294  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:43.663039  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:43.797462  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:44.163384  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:44.298479  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:44.361389  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:44.663146  438797 kapi.go:107] duration metric: took 1m12.005213137s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 18:38:44.799974  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:45.298831  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:45.797173  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:46.298332  438797 kapi.go:107] duration metric: took 1m10.504287763s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 18:38:46.299998  438797 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-966657 cluster.
	I0819 18:38:46.301312  438797 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 18:38:46.302586  438797 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 18:38:46.303915  438797 out.go:177] * Enabled addons: nvidia-device-plugin, helm-tiller, cloud-spanner, ingress-dns, metrics-server, inspektor-gadget, storage-provisioner, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0819 18:38:46.305076  438797 addons.go:510] duration metric: took 1m22.179069136s for enable addons: enabled=[nvidia-device-plugin helm-tiller cloud-spanner ingress-dns metrics-server inspektor-gadget storage-provisioner yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
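The gcp-auth messages above say a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a minimal sketch only: the label key is taken from that output, while the "true" value, the pod name, and the container are illustrative assumptions, not something the log confirms.

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// skipGCPAuthPod builds a pod spec carrying the gcp-auth-skip-secret label
	// so the gcp-auth webhook would leave it alone. Everything except the label
	// key is a placeholder for illustration.
	func skipGCPAuthPod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds",                                    // hypothetical name
				Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // value assumed
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "nginx"}, // placeholder container
				},
			},
		}
	}

	func main() {
		fmt.Println(skipGCPAuthPod().Labels)
	}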
	I0819 18:38:46.361484  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:48.860418  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:51.362075  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:53.860225  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:55.860329  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:57.862204  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:00.361106  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:02.860540  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:05.360976  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:07.861150  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:10.359698  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:12.362412  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:14.860896  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:17.360857  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:19.860955  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:20.361179  438797 pod_ready.go:93] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"True"
	I0819 18:39:20.361206  438797 pod_ready.go:82] duration metric: took 1m46.507324914s for pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace to be "Ready" ...
	I0819 18:39:20.361219  438797 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-pndfn" in "kube-system" namespace to be "Ready" ...
	I0819 18:39:20.367550  438797 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-pndfn" in "kube-system" namespace has status "Ready":"True"
	I0819 18:39:20.367583  438797 pod_ready.go:82] duration metric: took 6.357166ms for pod "nvidia-device-plugin-daemonset-pndfn" in "kube-system" namespace to be "Ready" ...
	I0819 18:39:20.367605  438797 pod_ready.go:39] duration metric: took 1m47.664730452s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
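The pod_ready lines above amount to polling a pod until its Ready condition reports True. Below is a minimal client-go sketch of that idea, assuming a standard clientset and a 2-second poll interval; it is not minikube's pod_ready.go implementation.

	package podready

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodReady polls a named pod until its Ready condition is True
	// or the timeout expires, roughly what the pod_ready log lines report.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // assumed poll interval
		}
		return fmt.Errorf("pod %s/%s was not Ready within %v", ns, name, timeout)
	}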
	I0819 18:39:20.367625  438797 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:39:20.367656  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:39:20.367726  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:39:20.411676  438797 cri.go:89] found id: "da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677"
	I0819 18:39:20.411700  438797 cri.go:89] found id: ""
	I0819 18:39:20.411709  438797 logs.go:276] 1 containers: [da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677]
	I0819 18:39:20.411761  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:20.416138  438797 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:39:20.416206  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:39:20.454911  438797 cri.go:89] found id: "48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8"
	I0819 18:39:20.454936  438797 cri.go:89] found id: ""
	I0819 18:39:20.454944  438797 logs.go:276] 1 containers: [48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8]
	I0819 18:39:20.454994  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:20.459349  438797 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:39:20.459419  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:39:20.502874  438797 cri.go:89] found id: "197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc"
	I0819 18:39:20.502903  438797 cri.go:89] found id: ""
	I0819 18:39:20.502912  438797 logs.go:276] 1 containers: [197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc]
	I0819 18:39:20.502962  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:20.507279  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:39:20.507345  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:39:20.549289  438797 cri.go:89] found id: "56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685"
	I0819 18:39:20.549322  438797 cri.go:89] found id: ""
	I0819 18:39:20.549334  438797 logs.go:276] 1 containers: [56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685]
	I0819 18:39:20.549402  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:20.553374  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:39:20.553445  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:39:20.603168  438797 cri.go:89] found id: "f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc"
	I0819 18:39:20.603194  438797 cri.go:89] found id: ""
	I0819 18:39:20.603203  438797 logs.go:276] 1 containers: [f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc]
	I0819 18:39:20.603259  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:20.608087  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:39:20.608172  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:39:20.652582  438797 cri.go:89] found id: "ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892"
	I0819 18:39:20.652614  438797 cri.go:89] found id: ""
	I0819 18:39:20.652623  438797 logs.go:276] 1 containers: [ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892]
	I0819 18:39:20.652679  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:20.656708  438797 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:39:20.656804  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:39:20.698521  438797 cri.go:89] found id: ""
	I0819 18:39:20.698561  438797 logs.go:276] 0 containers: []
	W0819 18:39:20.698573  438797 logs.go:278] No container was found matching "kindnet"
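The cri.go lines above locate containers by running `sudo crictl ps -a --quiet --name=<component>` and collecting the returned IDs. A rough local equivalent using os/exec is sketched below; it is an assumption-level illustration run directly on the host rather than through minikube's ssh_runner.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// findContainerIDs lists all CRI containers whose name matches the filter,
	// mirroring the crictl invocation shown in the log above.
	func findContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		ids, err := findContainerIDs("kube-apiserver")
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}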
	I0819 18:39:20.698587  438797 logs.go:123] Gathering logs for kubelet ...
	I0819 18:39:20.698603  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 18:39:20.744623  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:25 addons-966657 kubelet[1230]: W0819 18:37:25.870712    1230 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-966657" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-966657' and this object
	W0819 18:39:20.744798  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:25 addons-966657 kubelet[1230]: E0819 18:37:25.870758    1230 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:20.748613  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.882611    1230 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:20.748778  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.882652    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:20.748911  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.884604    1230 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:20.749074  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.884711    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	I0819 18:39:20.781709  438797 logs.go:123] Gathering logs for dmesg ...
	I0819 18:39:20.781746  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:39:20.797417  438797 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:39:20.797451  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:39:20.929424  438797 logs.go:123] Gathering logs for etcd [48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8] ...
	I0819 18:39:20.929465  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8"
	I0819 18:39:20.983555  438797 logs.go:123] Gathering logs for kube-apiserver [da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677] ...
	I0819 18:39:20.983600  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677"
	I0819 18:39:21.040017  438797 logs.go:123] Gathering logs for coredns [197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc] ...
	I0819 18:39:21.040054  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc"
	I0819 18:39:21.080692  438797 logs.go:123] Gathering logs for kube-scheduler [56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685] ...
	I0819 18:39:21.080729  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685"
	I0819 18:39:21.127313  438797 logs.go:123] Gathering logs for kube-proxy [f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc] ...
	I0819 18:39:21.127355  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc"
	I0819 18:39:21.164794  438797 logs.go:123] Gathering logs for kube-controller-manager [ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892] ...
	I0819 18:39:21.164829  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892"
	I0819 18:39:21.233559  438797 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:39:21.233603  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:39:22.088599  438797 logs.go:123] Gathering logs for container status ...
	I0819 18:39:22.088657  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:39:22.136415  438797 out.go:358] Setting ErrFile to fd 2...
	I0819 18:39:22.136446  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 18:39:22.136506  438797 out.go:270] X Problems detected in kubelet:
	W0819 18:39:22.136519  438797 out.go:270]   Aug 19 18:37:25 addons-966657 kubelet[1230]: E0819 18:37:25.870758    1230 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:22.136526  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.882611    1230 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:22.136538  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.882652    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:22.136546  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.884604    1230 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:22.136555  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.884711    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	I0819 18:39:22.136563  438797 out.go:358] Setting ErrFile to fd 2...
	I0819 18:39:22.136573  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:39:32.137239  438797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:39:32.156167  438797 api_server.go:72] duration metric: took 2m8.030177255s to wait for apiserver process to appear ...
	I0819 18:39:32.156209  438797 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:39:32.156261  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:39:32.156338  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:39:32.197168  438797 cri.go:89] found id: "da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677"
	I0819 18:39:32.197198  438797 cri.go:89] found id: ""
	I0819 18:39:32.197208  438797 logs.go:276] 1 containers: [da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677]
	I0819 18:39:32.197280  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:32.201510  438797 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:39:32.201606  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:39:32.241186  438797 cri.go:89] found id: "48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8"
	I0819 18:39:32.241225  438797 cri.go:89] found id: ""
	I0819 18:39:32.241235  438797 logs.go:276] 1 containers: [48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8]
	I0819 18:39:32.241293  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:32.245892  438797 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:39:32.245981  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:39:32.295547  438797 cri.go:89] found id: "197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc"
	I0819 18:39:32.295580  438797 cri.go:89] found id: ""
	I0819 18:39:32.295590  438797 logs.go:276] 1 containers: [197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc]
	I0819 18:39:32.295654  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:32.300315  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:39:32.300403  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:39:32.340431  438797 cri.go:89] found id: "56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685"
	I0819 18:39:32.340458  438797 cri.go:89] found id: ""
	I0819 18:39:32.340467  438797 logs.go:276] 1 containers: [56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685]
	I0819 18:39:32.340519  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:32.344857  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:39:32.344934  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:39:32.393242  438797 cri.go:89] found id: "f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc"
	I0819 18:39:32.393269  438797 cri.go:89] found id: ""
	I0819 18:39:32.393279  438797 logs.go:276] 1 containers: [f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc]
	I0819 18:39:32.393346  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:32.397711  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:39:32.397797  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:39:32.436248  438797 cri.go:89] found id: "ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892"
	I0819 18:39:32.436277  438797 cri.go:89] found id: ""
	I0819 18:39:32.436286  438797 logs.go:276] 1 containers: [ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892]
	I0819 18:39:32.436355  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:32.440604  438797 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:39:32.440685  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:39:32.484235  438797 cri.go:89] found id: ""
	I0819 18:39:32.484268  438797 logs.go:276] 0 containers: []
	W0819 18:39:32.484281  438797 logs.go:278] No container was found matching "kindnet"
	I0819 18:39:32.484294  438797 logs.go:123] Gathering logs for kubelet ...
	I0819 18:39:32.484309  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 18:39:32.533994  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:25 addons-966657 kubelet[1230]: W0819 18:37:25.870712    1230 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-966657" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-966657' and this object
	W0819 18:39:32.534168  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:25 addons-966657 kubelet[1230]: E0819 18:37:25.870758    1230 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:32.538060  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.882611    1230 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:32.538227  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.882652    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:32.538361  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.884604    1230 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:32.538526  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.884711    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	I0819 18:39:32.578443  438797 logs.go:123] Gathering logs for kube-scheduler [56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685] ...
	I0819 18:39:32.578493  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685"
	I0819 18:39:32.627803  438797 logs.go:123] Gathering logs for kube-controller-manager [ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892] ...
	I0819 18:39:32.627844  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892"
	I0819 18:39:32.688319  438797 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:39:32.688362  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:39:33.678100  438797 logs.go:123] Gathering logs for container status ...
	I0819 18:39:33.678156  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:39:33.725334  438797 logs.go:123] Gathering logs for dmesg ...
	I0819 18:39:33.725378  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:39:33.740220  438797 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:39:33.740266  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:39:33.851832  438797 logs.go:123] Gathering logs for kube-apiserver [da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677] ...
	I0819 18:39:33.851880  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677"
	I0819 18:39:33.898195  438797 logs.go:123] Gathering logs for etcd [48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8] ...
	I0819 18:39:33.898233  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8"
	I0819 18:39:33.955951  438797 logs.go:123] Gathering logs for coredns [197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc] ...
	I0819 18:39:33.956000  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc"
	I0819 18:39:33.994009  438797 logs.go:123] Gathering logs for kube-proxy [f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc] ...
	I0819 18:39:33.994057  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc"
	I0819 18:39:34.031336  438797 out.go:358] Setting ErrFile to fd 2...
	I0819 18:39:34.031366  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 18:39:34.031428  438797 out.go:270] X Problems detected in kubelet:
	W0819 18:39:34.031448  438797 out.go:270]   Aug 19 18:37:25 addons-966657 kubelet[1230]: E0819 18:37:25.870758    1230 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:34.031459  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.882611    1230 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:34.031470  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.882652    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:34.031480  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.884604    1230 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:34.031491  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.884711    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	I0819 18:39:34.031501  438797 out.go:358] Setting ErrFile to fd 2...
	I0819 18:39:34.031511  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:39:44.032883  438797 api_server.go:253] Checking apiserver healthz at https://192.168.39.241:8443/healthz ...
	I0819 18:39:44.038026  438797 api_server.go:279] https://192.168.39.241:8443/healthz returned 200:
	ok
	I0819 18:39:44.039150  438797 api_server.go:141] control plane version: v1.31.0
	I0819 18:39:44.039178  438797 api_server.go:131] duration metric: took 11.88296183s to wait for apiserver health ...
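The healthz check logged above is an HTTPS GET against the apiserver that treats a 200 response with body "ok" as healthy. A minimal standard-library sketch follows; skipping TLS verification is an assumption made for brevity (minikube itself authenticates against the cluster CA), and the URL is the one shown in the log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy probes /healthz and reports whether the apiserver
	// answered 200 "ok", as in the api_server log lines above.
	func apiserverHealthy(url string) (bool, error) {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // assumption: skip CA verification
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.241:8443/healthz")
		fmt.Println(ok, err)
	}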
	I0819 18:39:44.039186  438797 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:39:44.039208  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:39:44.039257  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:39:44.078880  438797 cri.go:89] found id: "da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677"
	I0819 18:39:44.078907  438797 cri.go:89] found id: ""
	I0819 18:39:44.078917  438797 logs.go:276] 1 containers: [da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677]
	I0819 18:39:44.078985  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:44.083366  438797 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:39:44.083443  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:39:44.131025  438797 cri.go:89] found id: "48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8"
	I0819 18:39:44.131052  438797 cri.go:89] found id: ""
	I0819 18:39:44.131062  438797 logs.go:276] 1 containers: [48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8]
	I0819 18:39:44.131128  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:44.135340  438797 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:39:44.135415  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:39:44.177560  438797 cri.go:89] found id: "197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc"
	I0819 18:39:44.177584  438797 cri.go:89] found id: ""
	I0819 18:39:44.177593  438797 logs.go:276] 1 containers: [197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc]
	I0819 18:39:44.177659  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:44.182133  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:39:44.182212  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:39:44.221541  438797 cri.go:89] found id: "56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685"
	I0819 18:39:44.221569  438797 cri.go:89] found id: ""
	I0819 18:39:44.221577  438797 logs.go:276] 1 containers: [56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685]
	I0819 18:39:44.221633  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:44.225749  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:39:44.225838  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:39:44.268699  438797 cri.go:89] found id: "f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc"
	I0819 18:39:44.268730  438797 cri.go:89] found id: ""
	I0819 18:39:44.268739  438797 logs.go:276] 1 containers: [f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc]
	I0819 18:39:44.268803  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:44.272788  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:39:44.272881  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:39:44.310842  438797 cri.go:89] found id: "ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892"
	I0819 18:39:44.310876  438797 cri.go:89] found id: ""
	I0819 18:39:44.310887  438797 logs.go:276] 1 containers: [ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892]
	I0819 18:39:44.310956  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:44.315518  438797 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:39:44.315602  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:39:44.352641  438797 cri.go:89] found id: ""
	I0819 18:39:44.352670  438797 logs.go:276] 0 containers: []
	W0819 18:39:44.352679  438797 logs.go:278] No container was found matching "kindnet"
	I0819 18:39:44.352688  438797 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:39:44.352701  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:39:45.384989  438797 logs.go:123] Gathering logs for container status ...
	I0819 18:39:45.385060  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:39:45.432294  438797 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:39:45.432334  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:39:45.547783  438797 logs.go:123] Gathering logs for etcd [48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8] ...
	I0819 18:39:45.547826  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8"
	I0819 18:39:45.614208  438797 logs.go:123] Gathering logs for coredns [197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc] ...
	I0819 18:39:45.614261  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc"
	I0819 18:39:45.655480  438797 logs.go:123] Gathering logs for kube-scheduler [56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685] ...
	I0819 18:39:45.655518  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685"
	I0819 18:39:45.699611  438797 logs.go:123] Gathering logs for kube-proxy [f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc] ...
	I0819 18:39:45.699655  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc"
	I0819 18:39:45.734878  438797 logs.go:123] Gathering logs for kube-controller-manager [ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892] ...
	I0819 18:39:45.734914  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892"
	I0819 18:39:45.797678  438797 logs.go:123] Gathering logs for kubelet ...
	I0819 18:39:45.797739  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 18:39:45.840397  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:25 addons-966657 kubelet[1230]: W0819 18:37:25.870712    1230 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-966657" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-966657' and this object
	W0819 18:39:45.840578  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:25 addons-966657 kubelet[1230]: E0819 18:37:25.870758    1230 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:45.844409  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.882611    1230 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:45.844602  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.882652    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:45.844735  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.884604    1230 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:45.844899  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.884711    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	I0819 18:39:45.879301  438797 logs.go:123] Gathering logs for dmesg ...
	I0819 18:39:45.879340  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:39:45.895407  438797 logs.go:123] Gathering logs for kube-apiserver [da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677] ...
	I0819 18:39:45.895444  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677"
	I0819 18:39:45.951862  438797 out.go:358] Setting ErrFile to fd 2...
	I0819 18:39:45.951899  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 18:39:45.951973  438797 out.go:270] X Problems detected in kubelet:
	W0819 18:39:45.951981  438797 out.go:270]   Aug 19 18:37:25 addons-966657 kubelet[1230]: E0819 18:37:25.870758    1230 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:45.951988  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.882611    1230 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:45.951999  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.882652    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:45.952008  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.884604    1230 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:45.952016  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.884711    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	I0819 18:39:45.952023  438797 out.go:358] Setting ErrFile to fd 2...
	I0819 18:39:45.952029  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:39:55.961783  438797 system_pods.go:59] 18 kube-system pods found
	I0819 18:39:55.961824  438797 system_pods.go:61] "coredns-6f6b679f8f-fzk2l" [b3f241a1-fac9-48ca-aafa-0c699106ad16] Running
	I0819 18:39:55.961830  438797 system_pods.go:61] "csi-hostpath-attacher-0" [92ae9c6d-2f1c-41d7-b221-323290b08fb6] Running
	I0819 18:39:55.961834  438797 system_pods.go:61] "csi-hostpath-resizer-0" [d4a1242b-62ac-48ca-8aaa-3721c77678af] Running
	I0819 18:39:55.961838  438797 system_pods.go:61] "csi-hostpathplugin-rc72c" [f2007ce2-0f1c-494b-b7d7-7b77e3f41204] Running
	I0819 18:39:55.961842  438797 system_pods.go:61] "etcd-addons-966657" [4ba7a901-706b-467b-8544-5d6a45837b6f] Running
	I0819 18:39:55.961845  438797 system_pods.go:61] "kube-apiserver-addons-966657" [28b9be71-cbd9-42de-ab93-77a4123d1384] Running
	I0819 18:39:55.961848  438797 system_pods.go:61] "kube-controller-manager-addons-966657" [8dc3c7cb-03c1-4317-aeac-0ec1297748a0] Running
	I0819 18:39:55.961852  438797 system_pods.go:61] "kube-ingress-dns-minikube" [92385815-777d-486c-9a29-ea8247710fb6] Running
	I0819 18:39:55.961855  438797 system_pods.go:61] "kube-proxy-rthg8" [4baeaa36-3b6c-460f-8f0d-b41cc7d6ac26] Running
	I0819 18:39:55.961858  438797 system_pods.go:61] "kube-scheduler-addons-966657" [8a8c6ceb-8d89-4133-8630-a4256ee2677f] Running
	I0819 18:39:55.961860  438797 system_pods.go:61] "metrics-server-8988944d9-56ss9" [6ad30996-e1ba-4a2d-9054-f54a241e9efb] Running
	I0819 18:39:55.961864  438797 system_pods.go:61] "nvidia-device-plugin-daemonset-pndfn" [c413c9e7-9614-44c5-9845-3d2b40c62cba] Running
	I0819 18:39:55.961866  438797 system_pods.go:61] "registry-6fb4cdfc84-x89qh" [29139ceb-43bf-40ed-8a00-81e990604d2f] Running
	I0819 18:39:55.961869  438797 system_pods.go:61] "registry-proxy-jwchm" [b551e7e6-c198-454e-a913-a278aaa5bf0b] Running
	I0819 18:39:55.961873  438797 system_pods.go:61] "snapshot-controller-56fcc65765-95z9s" [8b3c99a9-f5c0-4457-ba35-4b57b693623a] Running
	I0819 18:39:55.961877  438797 system_pods.go:61] "snapshot-controller-56fcc65765-hjhg4" [25ae0391-8398-486d-9899-9a5c16b65da4] Running
	I0819 18:39:55.961880  438797 system_pods.go:61] "storage-provisioner" [f3f61185-366e-466a-8540-023b9332a231] Running
	I0819 18:39:55.961883  438797 system_pods.go:61] "tiller-deploy-b48cc5f79-vfspv" [6000c6c1-2382-4395-9752-1b553c6bd0a2] Running
	I0819 18:39:55.961890  438797 system_pods.go:74] duration metric: took 11.922697905s to wait for pod list to return data ...
	I0819 18:39:55.961897  438797 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:39:55.964736  438797 default_sa.go:45] found service account: "default"
	I0819 18:39:55.964771  438797 default_sa.go:55] duration metric: took 2.867593ms for default service account to be created ...
	I0819 18:39:55.964782  438797 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:39:55.972591  438797 system_pods.go:86] 18 kube-system pods found
	I0819 18:39:55.972625  438797 system_pods.go:89] "coredns-6f6b679f8f-fzk2l" [b3f241a1-fac9-48ca-aafa-0c699106ad16] Running
	I0819 18:39:55.972630  438797 system_pods.go:89] "csi-hostpath-attacher-0" [92ae9c6d-2f1c-41d7-b221-323290b08fb6] Running
	I0819 18:39:55.972634  438797 system_pods.go:89] "csi-hostpath-resizer-0" [d4a1242b-62ac-48ca-8aaa-3721c77678af] Running
	I0819 18:39:55.972638  438797 system_pods.go:89] "csi-hostpathplugin-rc72c" [f2007ce2-0f1c-494b-b7d7-7b77e3f41204] Running
	I0819 18:39:55.972641  438797 system_pods.go:89] "etcd-addons-966657" [4ba7a901-706b-467b-8544-5d6a45837b6f] Running
	I0819 18:39:55.972645  438797 system_pods.go:89] "kube-apiserver-addons-966657" [28b9be71-cbd9-42de-ab93-77a4123d1384] Running
	I0819 18:39:55.972648  438797 system_pods.go:89] "kube-controller-manager-addons-966657" [8dc3c7cb-03c1-4317-aeac-0ec1297748a0] Running
	I0819 18:39:55.972653  438797 system_pods.go:89] "kube-ingress-dns-minikube" [92385815-777d-486c-9a29-ea8247710fb6] Running
	I0819 18:39:55.972656  438797 system_pods.go:89] "kube-proxy-rthg8" [4baeaa36-3b6c-460f-8f0d-b41cc7d6ac26] Running
	I0819 18:39:55.972659  438797 system_pods.go:89] "kube-scheduler-addons-966657" [8a8c6ceb-8d89-4133-8630-a4256ee2677f] Running
	I0819 18:39:55.972662  438797 system_pods.go:89] "metrics-server-8988944d9-56ss9" [6ad30996-e1ba-4a2d-9054-f54a241e9efb] Running
	I0819 18:39:55.972665  438797 system_pods.go:89] "nvidia-device-plugin-daemonset-pndfn" [c413c9e7-9614-44c5-9845-3d2b40c62cba] Running
	I0819 18:39:55.972672  438797 system_pods.go:89] "registry-6fb4cdfc84-x89qh" [29139ceb-43bf-40ed-8a00-81e990604d2f] Running
	I0819 18:39:55.972675  438797 system_pods.go:89] "registry-proxy-jwchm" [b551e7e6-c198-454e-a913-a278aaa5bf0b] Running
	I0819 18:39:55.972678  438797 system_pods.go:89] "snapshot-controller-56fcc65765-95z9s" [8b3c99a9-f5c0-4457-ba35-4b57b693623a] Running
	I0819 18:39:55.972681  438797 system_pods.go:89] "snapshot-controller-56fcc65765-hjhg4" [25ae0391-8398-486d-9899-9a5c16b65da4] Running
	I0819 18:39:55.972685  438797 system_pods.go:89] "storage-provisioner" [f3f61185-366e-466a-8540-023b9332a231] Running
	I0819 18:39:55.972688  438797 system_pods.go:89] "tiller-deploy-b48cc5f79-vfspv" [6000c6c1-2382-4395-9752-1b553c6bd0a2] Running
	I0819 18:39:55.972694  438797 system_pods.go:126] duration metric: took 7.907113ms to wait for k8s-apps to be running ...
	I0819 18:39:55.972702  438797 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:39:55.972753  438797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:39:55.987949  438797 system_svc.go:56] duration metric: took 15.23428ms WaitForService to wait for kubelet
	I0819 18:39:55.988070  438797 kubeadm.go:582] duration metric: took 2m31.862008825s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:39:55.988126  438797 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:39:55.991337  438797 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:39:55.991372  438797 node_conditions.go:123] node cpu capacity is 2
	I0819 18:39:55.991390  438797 node_conditions.go:105] duration metric: took 3.258111ms to run NodePressure ...
	I0819 18:39:55.991407  438797 start.go:241] waiting for startup goroutines ...
	I0819 18:39:55.991417  438797 start.go:246] waiting for cluster config update ...
	I0819 18:39:55.991439  438797 start.go:255] writing updated cluster config ...
	I0819 18:39:55.991763  438797 ssh_runner.go:195] Run: rm -f paused
	I0819 18:39:56.046693  438797 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:39:56.048564  438797 out.go:177] * Done! kubectl is now configured to use "addons-966657" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.901610820Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092984901545803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0f2a6e1-9336-4cd2-ba77-fc41d73e9a23 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.902205263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ae24d3c-b88d-41c9-a902-b798d49dcbc5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.902274390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ae24d3c-b88d-41c9-a902-b798d49dcbc5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.902772472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f68392913c1ef75853565e07e9df2d568477c89acdc7d92319c0569de3402fbc,PodSandboxId:5a2ec64f99d20e30565c1a324b4e09615fed9e9a1824e46ba1646ddef5722318,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724092977362547599,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pk2z9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad9ba1ac-d896-4b35-a244-cb2eeaa052ab,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de02a8dc20f05b40d74c104934523b5394479cbe0bfc2a7561b7b252d2cd77b8,PodSandboxId:78482e936d69a95ce711d42ddb4bf237568a5559194d1d0c02cf99c6f95b36b5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724092838368839511,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7779fae-ee4a-477e-8939-1538b08b9407,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8ee3211c4b7ccde1e371cce8c18b036120962956af84f30af1eea165a1d299,PodSandboxId:be6fdbbd0e74c71a307c454e6d018009f7144b8fd5d78a35fbaf8d337c20b020,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724092798359155854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a470222-a47f-4dff-b
bbd-80c1c0ad3058,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cece58731c973105cb8b2f86d7fb142d880f0b729a1099ff653f34de5f451d5,PodSandboxId:3233449ed066c63beebb3210c1c3ca56aac2609c2880673f83803b6de461b3c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724092706034041595,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2vmrf,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 182faf0c-7d87-4277-a4fe-e09a8e0cc76a,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a444400fafc0e5bd72414ae8df6762aca895cb2b36c72efa544a21a9ea48b3f8,PodSandboxId:427d408222ea20f242b60663673f0de8a447b46c372a8bd1978fd045da351c10,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724092703718252716,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-m4jfb,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fd1df0-f1a2-48fb-b9cf-932fcd5d3e06,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1976b4f5fd018ad82870990fff98b9aacc811dacc169b3db8699d711fb9a977c,PodSandboxId:18b60c4088f3c3eeef93af9d2788b9c2b0e7bf1989da331af554ed99bf5720c8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724092700529237517,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes
.pod.name: local-path-provisioner-86d989889c-7rt79,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a45c0af7-5e2d-4b0a-87e0-eca7079e429c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269,PodSandboxId:45c10fa74e7d57175de2368f7ed69fe5fdb39d0f5f4d939cae05f01960499a8a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724092689934674569,Labels:map[string]string{
io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-56ss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad30996-e1ba-4a2d-9054-f54a241e9efb,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2a083912efd25f0af7a098d4001fc072b2f79f7a70dad39be5a2f71444bab6,PodSandboxId:a6c1288294afeabf56e5f2b179f122e3546f34f25581b2c969e3b25454e134a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092651097137947,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3f61185-366e-466a-8540-023b9332a231,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc,PodSandboxId:286057f9bd85c480e302c4e60b7a3ec06a1cb6c39ab7ede8d742fda63b4f6345,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1724092646288928922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fzk2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f241a1-fac9-48ca-aafa-0c699106ad16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc,PodSandboxId:30d55afa4ea740d9a8e2ce7eca2b534dfbae555a453daf570015af4eb1b204fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092645112645591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rthg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4baeaa36-3b6c-460f-8f0d-b41cc7d6ac26,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685,PodSandboxId:ac2707a4235eaf409decd100f5860a6363440a764462927ad1c8bac4ca2fd2c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f72
9c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092633517939036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 717d506cf9569961af71f86e9ac27e29,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677,PodSandboxId:a83464ab9a456d1d58cc7c15f20e20a92fee5da49457aa1b5ddac7a813e649bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092633485811297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 288b4b601d8383b517860a32412ecddd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8,PodSandboxId:bd05ad205c9bbcc28daa7843afcb3c8d9de9553d8527af469fb03e663cf59f46,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092633469256040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d246e74ff84d05d8ef33ef849aa3e3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892,PodSandboxId:177205b0fa8540ec961c98690a932bcc10dc74b5648ae022d3a6bff7828cd551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092633425469231,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a18de3373db5c4569356a3d2e49ce8b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ae24d3c-b88d-41c9-a902-b798d49dcbc5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.942532549Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2efc69c-3aca-4098-bb30-2742ea78e53e name=/runtime.v1.RuntimeService/Version
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.942609440Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2efc69c-3aca-4098-bb30-2742ea78e53e name=/runtime.v1.RuntimeService/Version
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.943999453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b462ccf1-2c15-46f9-bc68-6c57222778ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.945266072Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092984945240084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b462ccf1-2c15-46f9-bc68-6c57222778ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.945851745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11744afc-50f5-4862-af38-24e1a8b2b1a4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.945938548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11744afc-50f5-4862-af38-24e1a8b2b1a4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.946316868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f68392913c1ef75853565e07e9df2d568477c89acdc7d92319c0569de3402fbc,PodSandboxId:5a2ec64f99d20e30565c1a324b4e09615fed9e9a1824e46ba1646ddef5722318,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724092977362547599,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pk2z9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad9ba1ac-d896-4b35-a244-cb2eeaa052ab,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de02a8dc20f05b40d74c104934523b5394479cbe0bfc2a7561b7b252d2cd77b8,PodSandboxId:78482e936d69a95ce711d42ddb4bf237568a5559194d1d0c02cf99c6f95b36b5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724092838368839511,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7779fae-ee4a-477e-8939-1538b08b9407,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8ee3211c4b7ccde1e371cce8c18b036120962956af84f30af1eea165a1d299,PodSandboxId:be6fdbbd0e74c71a307c454e6d018009f7144b8fd5d78a35fbaf8d337c20b020,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724092798359155854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a470222-a47f-4dff-b
bbd-80c1c0ad3058,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cece58731c973105cb8b2f86d7fb142d880f0b729a1099ff653f34de5f451d5,PodSandboxId:3233449ed066c63beebb3210c1c3ca56aac2609c2880673f83803b6de461b3c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724092706034041595,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2vmrf,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 182faf0c-7d87-4277-a4fe-e09a8e0cc76a,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a444400fafc0e5bd72414ae8df6762aca895cb2b36c72efa544a21a9ea48b3f8,PodSandboxId:427d408222ea20f242b60663673f0de8a447b46c372a8bd1978fd045da351c10,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724092703718252716,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-m4jfb,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fd1df0-f1a2-48fb-b9cf-932fcd5d3e06,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1976b4f5fd018ad82870990fff98b9aacc811dacc169b3db8699d711fb9a977c,PodSandboxId:18b60c4088f3c3eeef93af9d2788b9c2b0e7bf1989da331af554ed99bf5720c8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724092700529237517,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes
.pod.name: local-path-provisioner-86d989889c-7rt79,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a45c0af7-5e2d-4b0a-87e0-eca7079e429c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269,PodSandboxId:45c10fa74e7d57175de2368f7ed69fe5fdb39d0f5f4d939cae05f01960499a8a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724092689934674569,Labels:map[string]string{
io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-56ss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad30996-e1ba-4a2d-9054-f54a241e9efb,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2a083912efd25f0af7a098d4001fc072b2f79f7a70dad39be5a2f71444bab6,PodSandboxId:a6c1288294afeabf56e5f2b179f122e3546f34f25581b2c969e3b25454e134a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092651097137947,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3f61185-366e-466a-8540-023b9332a231,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc,PodSandboxId:286057f9bd85c480e302c4e60b7a3ec06a1cb6c39ab7ede8d742fda63b4f6345,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1724092646288928922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fzk2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f241a1-fac9-48ca-aafa-0c699106ad16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc,PodSandboxId:30d55afa4ea740d9a8e2ce7eca2b534dfbae555a453daf570015af4eb1b204fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092645112645591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rthg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4baeaa36-3b6c-460f-8f0d-b41cc7d6ac26,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685,PodSandboxId:ac2707a4235eaf409decd100f5860a6363440a764462927ad1c8bac4ca2fd2c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f72
9c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092633517939036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 717d506cf9569961af71f86e9ac27e29,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677,PodSandboxId:a83464ab9a456d1d58cc7c15f20e20a92fee5da49457aa1b5ddac7a813e649bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092633485811297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 288b4b601d8383b517860a32412ecddd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8,PodSandboxId:bd05ad205c9bbcc28daa7843afcb3c8d9de9553d8527af469fb03e663cf59f46,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092633469256040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d246e74ff84d05d8ef33ef849aa3e3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892,PodSandboxId:177205b0fa8540ec961c98690a932bcc10dc74b5648ae022d3a6bff7828cd551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092633425469231,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a18de3373db5c4569356a3d2e49ce8b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11744afc-50f5-4862-af38-24e1a8b2b1a4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.983526979Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20e0259c-c2e3-4d62-a91d-e2ee0ebde6a6 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.983654667Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20e0259c-c2e3-4d62-a91d-e2ee0ebde6a6 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.989447047Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9115666-ae60-4b12-b549-f240b42d2ecb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.991242734Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092984991008623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9115666-ae60-4b12-b549-f240b42d2ecb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.992630640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ca03fd3-3375-42a7-bcc9-440f9e89ad8a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.992707244Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ca03fd3-3375-42a7-bcc9-440f9e89ad8a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:43:04 addons-966657 crio[678]: time="2024-08-19 18:43:04.992977260Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f68392913c1ef75853565e07e9df2d568477c89acdc7d92319c0569de3402fbc,PodSandboxId:5a2ec64f99d20e30565c1a324b4e09615fed9e9a1824e46ba1646ddef5722318,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724092977362547599,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pk2z9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad9ba1ac-d896-4b35-a244-cb2eeaa052ab,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de02a8dc20f05b40d74c104934523b5394479cbe0bfc2a7561b7b252d2cd77b8,PodSandboxId:78482e936d69a95ce711d42ddb4bf237568a5559194d1d0c02cf99c6f95b36b5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724092838368839511,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7779fae-ee4a-477e-8939-1538b08b9407,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8ee3211c4b7ccde1e371cce8c18b036120962956af84f30af1eea165a1d299,PodSandboxId:be6fdbbd0e74c71a307c454e6d018009f7144b8fd5d78a35fbaf8d337c20b020,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724092798359155854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a470222-a47f-4dff-b
bbd-80c1c0ad3058,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cece58731c973105cb8b2f86d7fb142d880f0b729a1099ff653f34de5f451d5,PodSandboxId:3233449ed066c63beebb3210c1c3ca56aac2609c2880673f83803b6de461b3c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724092706034041595,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2vmrf,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 182faf0c-7d87-4277-a4fe-e09a8e0cc76a,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a444400fafc0e5bd72414ae8df6762aca895cb2b36c72efa544a21a9ea48b3f8,PodSandboxId:427d408222ea20f242b60663673f0de8a447b46c372a8bd1978fd045da351c10,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724092703718252716,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-m4jfb,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fd1df0-f1a2-48fb-b9cf-932fcd5d3e06,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1976b4f5fd018ad82870990fff98b9aacc811dacc169b3db8699d711fb9a977c,PodSandboxId:18b60c4088f3c3eeef93af9d2788b9c2b0e7bf1989da331af554ed99bf5720c8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724092700529237517,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes
.pod.name: local-path-provisioner-86d989889c-7rt79,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a45c0af7-5e2d-4b0a-87e0-eca7079e429c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269,PodSandboxId:45c10fa74e7d57175de2368f7ed69fe5fdb39d0f5f4d939cae05f01960499a8a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724092689934674569,Labels:map[string]string{
io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-56ss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad30996-e1ba-4a2d-9054-f54a241e9efb,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2a083912efd25f0af7a098d4001fc072b2f79f7a70dad39be5a2f71444bab6,PodSandboxId:a6c1288294afeabf56e5f2b179f122e3546f34f25581b2c969e3b25454e134a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092651097137947,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3f61185-366e-466a-8540-023b9332a231,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc,PodSandboxId:286057f9bd85c480e302c4e60b7a3ec06a1cb6c39ab7ede8d742fda63b4f6345,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1724092646288928922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fzk2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f241a1-fac9-48ca-aafa-0c699106ad16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc,PodSandboxId:30d55afa4ea740d9a8e2ce7eca2b534dfbae555a453daf570015af4eb1b204fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092645112645591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rthg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4baeaa36-3b6c-460f-8f0d-b41cc7d6ac26,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685,PodSandboxId:ac2707a4235eaf409decd100f5860a6363440a764462927ad1c8bac4ca2fd2c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f72
9c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092633517939036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 717d506cf9569961af71f86e9ac27e29,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677,PodSandboxId:a83464ab9a456d1d58cc7c15f20e20a92fee5da49457aa1b5ddac7a813e649bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092633485811297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 288b4b601d8383b517860a32412ecddd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8,PodSandboxId:bd05ad205c9bbcc28daa7843afcb3c8d9de9553d8527af469fb03e663cf59f46,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092633469256040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d246e74ff84d05d8ef33ef849aa3e3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892,PodSandboxId:177205b0fa8540ec961c98690a932bcc10dc74b5648ae022d3a6bff7828cd551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092633425469231,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a18de3373db5c4569356a3d2e49ce8b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ca03fd3-3375-42a7-bcc9-440f9e89ad8a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:43:05 addons-966657 crio[678]: time="2024-08-19 18:43:05.028465337Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c18cf35-42b4-4239-b976-b8910e1e22d2 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:43:05 addons-966657 crio[678]: time="2024-08-19 18:43:05.028617384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c18cf35-42b4-4239-b976-b8910e1e22d2 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:43:05 addons-966657 crio[678]: time="2024-08-19 18:43:05.030417538Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81899962-2e9d-4711-9e4e-280e99e51028 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:43:05 addons-966657 crio[678]: time="2024-08-19 18:43:05.032755404Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092985032723734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81899962-2e9d-4711-9e4e-280e99e51028 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:43:05 addons-966657 crio[678]: time="2024-08-19 18:43:05.033495113Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6fc087e8-edc5-41ab-8a25-9b10921b7d1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:43:05 addons-966657 crio[678]: time="2024-08-19 18:43:05.033754040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6fc087e8-edc5-41ab-8a25-9b10921b7d1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:43:05 addons-966657 crio[678]: time="2024-08-19 18:43:05.034143094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f68392913c1ef75853565e07e9df2d568477c89acdc7d92319c0569de3402fbc,PodSandboxId:5a2ec64f99d20e30565c1a324b4e09615fed9e9a1824e46ba1646ddef5722318,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724092977362547599,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pk2z9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad9ba1ac-d896-4b35-a244-cb2eeaa052ab,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de02a8dc20f05b40d74c104934523b5394479cbe0bfc2a7561b7b252d2cd77b8,PodSandboxId:78482e936d69a95ce711d42ddb4bf237568a5559194d1d0c02cf99c6f95b36b5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724092838368839511,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7779fae-ee4a-477e-8939-1538b08b9407,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8ee3211c4b7ccde1e371cce8c18b036120962956af84f30af1eea165a1d299,PodSandboxId:be6fdbbd0e74c71a307c454e6d018009f7144b8fd5d78a35fbaf8d337c20b020,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724092798359155854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a470222-a47f-4dff-b
bbd-80c1c0ad3058,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cece58731c973105cb8b2f86d7fb142d880f0b729a1099ff653f34de5f451d5,PodSandboxId:3233449ed066c63beebb3210c1c3ca56aac2609c2880673f83803b6de461b3c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724092706034041595,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2vmrf,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 182faf0c-7d87-4277-a4fe-e09a8e0cc76a,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a444400fafc0e5bd72414ae8df6762aca895cb2b36c72efa544a21a9ea48b3f8,PodSandboxId:427d408222ea20f242b60663673f0de8a447b46c372a8bd1978fd045da351c10,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724092703718252716,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-m4jfb,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fd1df0-f1a2-48fb-b9cf-932fcd5d3e06,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1976b4f5fd018ad82870990fff98b9aacc811dacc169b3db8699d711fb9a977c,PodSandboxId:18b60c4088f3c3eeef93af9d2788b9c2b0e7bf1989da331af554ed99bf5720c8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724092700529237517,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes
.pod.name: local-path-provisioner-86d989889c-7rt79,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a45c0af7-5e2d-4b0a-87e0-eca7079e429c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269,PodSandboxId:45c10fa74e7d57175de2368f7ed69fe5fdb39d0f5f4d939cae05f01960499a8a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724092689934674569,Labels:map[string]string{
io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-56ss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad30996-e1ba-4a2d-9054-f54a241e9efb,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2a083912efd25f0af7a098d4001fc072b2f79f7a70dad39be5a2f71444bab6,PodSandboxId:a6c1288294afeabf56e5f2b179f122e3546f34f25581b2c969e3b25454e134a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092651097137947,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3f61185-366e-466a-8540-023b9332a231,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc,PodSandboxId:286057f9bd85c480e302c4e60b7a3ec06a1cb6c39ab7ede8d742fda63b4f6345,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1724092646288928922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fzk2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f241a1-fac9-48ca-aafa-0c699106ad16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc,PodSandboxId:30d55afa4ea740d9a8e2ce7eca2b534dfbae555a453daf570015af4eb1b204fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092645112645591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rthg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4baeaa36-3b6c-460f-8f0d-b41cc7d6ac26,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685,PodSandboxId:ac2707a4235eaf409decd100f5860a6363440a764462927ad1c8bac4ca2fd2c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f72
9c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092633517939036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 717d506cf9569961af71f86e9ac27e29,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677,PodSandboxId:a83464ab9a456d1d58cc7c15f20e20a92fee5da49457aa1b5ddac7a813e649bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092633485811297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 288b4b601d8383b517860a32412ecddd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8,PodSandboxId:bd05ad205c9bbcc28daa7843afcb3c8d9de9553d8527af469fb03e663cf59f46,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092633469256040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d246e74ff84d05d8ef33ef849aa3e3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892,PodSandboxId:177205b0fa8540ec961c98690a932bcc10dc74b5648ae022d3a6bff7828cd551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092633425469231,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a18de3373db5c4569356a3d2e49ce8b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6fc087e8-edc5-41ab-8a25-9b10921b7d1d name=/runtime.v1.RuntimeService/ListContainers
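
	The crio entries above are raw CRI RuntimeService request/response pairs. As a minimal sketch (assuming the k8s.io/cri-api v1 bindings and the CRI socket path listed in the node annotations, unix:///var/run/crio/crio.sock — not anything specific to this run), the same ListContainers call can be issued directly:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O socket; CRI traffic is plaintext gRPC over a unix socket.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// An empty filter corresponds to the "No filters were applied,
		// returning full container list" path logged by crio above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-18s %s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}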
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f68392913c1ef       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   5a2ec64f99d20       hello-world-app-55bf9c44b4-pk2z9
	de02a8dc20f05       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              2 minutes ago       Running             nginx                     0                   78482e936d69a       nginx
	4d8ee3211c4b7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   be6fdbbd0e74c       busybox
	9cece58731c97       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   4 minutes ago       Exited              patch                     0                   3233449ed066c       ingress-nginx-admission-patch-2vmrf
	a444400fafc0e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   4 minutes ago       Exited              create                    0                   427d408222ea2       ingress-nginx-admission-create-m4jfb
	1976b4f5fd018       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   18b60c4088f3c       local-path-provisioner-86d989889c-7rt79
	9fefad11c7927       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   45c10fa74e7d5       metrics-server-8988944d9-56ss9
	ea2a083912efd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   a6c1288294afe       storage-provisioner
	197c6a1ef6e6e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   286057f9bd85c       coredns-6f6b679f8f-fzk2l
	f4040c311d32e       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             5 minutes ago       Running             kube-proxy                0                   30d55afa4ea74       kube-proxy-rthg8
	56311b94f99b4       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             5 minutes ago       Running             kube-scheduler            0                   ac2707a4235ea       kube-scheduler-addons-966657
	da32522f010e9       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             5 minutes ago       Running             kube-apiserver            0                   a83464ab9a456       kube-apiserver-addons-966657
	48c646c07f67b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   bd05ad205c9bb       etcd-addons-966657
	ea9adb58c21d9       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             5 minutes ago       Running             kube-controller-manager   0                   177205b0fa854       kube-controller-manager-addons-966657
	
	
	==> coredns [197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc] <==
	[INFO] 10.244.0.7:43368 - 46601 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093133s
	[INFO] 10.244.0.7:51189 - 6785 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006301s
	[INFO] 10.244.0.7:51189 - 60575 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000056339s
	[INFO] 10.244.0.7:39776 - 17391 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054947s
	[INFO] 10.244.0.7:39776 - 22509 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097421s
	[INFO] 10.244.0.7:58029 - 37893 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000074145s
	[INFO] 10.244.0.7:58029 - 57095 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000057243s
	[INFO] 10.244.0.7:55527 - 3863 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000095928s
	[INFO] 10.244.0.7:55527 - 63274 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000108164s
	[INFO] 10.244.0.7:45700 - 51892 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084018s
	[INFO] 10.244.0.7:45700 - 41915 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038365s
	[INFO] 10.244.0.7:58784 - 45245 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000069768s
	[INFO] 10.244.0.7:58784 - 37823 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000036065s
	[INFO] 10.244.0.7:58667 - 62946 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040009s
	[INFO] 10.244.0.7:58667 - 30688 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000029142s
	[INFO] 10.244.0.22:41944 - 3024 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000264702s
	[INFO] 10.244.0.22:37263 - 58847 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000072612s
	[INFO] 10.244.0.22:36429 - 42311 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000250297s
	[INFO] 10.244.0.22:55387 - 14377 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000189539s
	[INFO] 10.244.0.22:41338 - 35053 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097156s
	[INFO] 10.244.0.22:60395 - 7931 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000133663s
	[INFO] 10.244.0.22:52321 - 45381 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000469332s
	[INFO] 10.244.0.22:58548 - 41996 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000907551s
	[INFO] 10.244.0.25:47377 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00024668s
	[INFO] 10.244.0.25:57387 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000153301s
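
	The alternating NXDOMAIN/NOERROR entries above are the pod resolver walking its DNS search list before trying the name as given. A small sketch of that expansion, assuming the search domains of a kube-system pod and the kubelet default ndots:5 (neither value is taken from the captured logs):

	package main

	import (
		"fmt"
		"strings"
	)

	// candidates mirrors resolv.conf search-list expansion: a name with fewer
	// than ndots dots is tried with each search suffix appended first, and the
	// literal name is tried last — producing the NXDOMAIN runs seen above.
	func candidates(name string, search []string, ndots int) []string {
		var out []string
		literalFirst := strings.Count(name, ".") >= ndots
		if literalFirst {
			out = append(out, name+".")
		}
		for _, s := range search {
			out = append(out, name+"."+s+".")
		}
		if !literalFirst {
			out = append(out, name+".")
		}
		return out
	}

	func main() {
		// Assumed search list for a pod in the kube-system namespace.
		search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
			fmt.Println(q)
		}
	}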
	
	
	==> describe nodes <==
	Name:               addons-966657
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-966657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=addons-966657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_37_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-966657
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:37:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-966657
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:42:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:41:23 +0000   Mon, 19 Aug 2024 18:37:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:41:23 +0000   Mon, 19 Aug 2024 18:37:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:41:23 +0000   Mon, 19 Aug 2024 18:37:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:41:23 +0000   Mon, 19 Aug 2024 18:37:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    addons-966657
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 e37e8403430242cdba308e6f37608b12
	  System UUID:                e37e8403-4302-42cd-ba30-8e6f37608b12
	  Boot ID:                    37c397af-beed-4978-aa9c-52347a7b6c21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     hello-world-app-55bf9c44b4-pk2z9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 coredns-6f6b679f8f-fzk2l                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m42s
	  kube-system                 etcd-addons-966657                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m47s
	  kube-system                 kube-apiserver-addons-966657               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-controller-manager-addons-966657      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-proxy-rthg8                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-scheduler-addons-966657               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 metrics-server-8988944d9-56ss9             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m36s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  local-path-storage          local-path-provisioner-86d989889c-7rt79    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m38s                  kube-proxy       
	  Normal  Starting                 5m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m53s (x8 over 5m53s)  kubelet          Node addons-966657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m53s (x8 over 5m53s)  kubelet          Node addons-966657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m53s (x7 over 5m53s)  kubelet          Node addons-966657 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m47s                  kubelet          Node addons-966657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m47s                  kubelet          Node addons-966657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m47s                  kubelet          Node addons-966657 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m46s                  kubelet          Node addons-966657 status is now: NodeReady
	  Normal  RegisteredNode           5m43s                  node-controller  Node addons-966657 event: Registered Node addons-966657 in Controller
	
	
	==> dmesg <==
	[  +6.264403] kauditd_printk_skb: 74 callbacks suppressed
	[ +10.483896] kauditd_printk_skb: 12 callbacks suppressed
	[Aug19 18:38] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.588865] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.087675] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.261550] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.172549] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.183281] kauditd_printk_skb: 78 callbacks suppressed
	[  +6.631030] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.316158] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.169179] kauditd_printk_skb: 28 callbacks suppressed
	[Aug19 18:39] kauditd_printk_skb: 28 callbacks suppressed
	[Aug19 18:40] kauditd_printk_skb: 45 callbacks suppressed
	[ +10.799620] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.380656] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.110541] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.028516] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.371677] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.808182] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.334805] kauditd_printk_skb: 20 callbacks suppressed
	[Aug19 18:41] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.051454] kauditd_printk_skb: 16 callbacks suppressed
	[ +10.148527] kauditd_printk_skb: 45 callbacks suppressed
	[Aug19 18:42] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.249167] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8] <==
	{"level":"info","ts":"2024-08-19T18:38:20.404737Z","caller":"traceutil/trace.go:171","msg":"trace[873809080] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1015; }","duration":"161.918979ms","start":"2024-08-19T18:38:20.242813Z","end":"2024-08-19T18:38:20.404732Z","steps":["trace[873809080] 'agreement among raft nodes before linearized reading'  (duration: 161.881395ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:38:20.404593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.511009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:38:20.404850Z","caller":"traceutil/trace.go:171","msg":"trace[621295776] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1015; }","duration":"104.771318ms","start":"2024-08-19T18:38:20.300069Z","end":"2024-08-19T18:38:20.404840Z","steps":["trace[621295776] 'agreement among raft nodes before linearized reading'  (duration: 104.411402ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:38:20.404888Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.551775ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:38:20.404921Z","caller":"traceutil/trace.go:171","msg":"trace[322440735] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1015; }","duration":"256.583445ms","start":"2024-08-19T18:38:20.148333Z","end":"2024-08-19T18:38:20.404916Z","steps":["trace[322440735] 'agreement among raft nodes before linearized reading'  (duration: 256.544319ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:38:20.404867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.190298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-8988944d9-56ss9.17ed352b09f160e2\" ","response":"range_response_count:1 size:813"}
	{"level":"info","ts":"2024-08-19T18:38:20.405156Z","caller":"traceutil/trace.go:171","msg":"trace[1655684797] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-8988944d9-56ss9.17ed352b09f160e2; range_end:; response_count:1; response_revision:1015; }","duration":"196.476499ms","start":"2024-08-19T18:38:20.208670Z","end":"2024-08-19T18:38:20.405147Z","steps":["trace[1655684797] 'agreement among raft nodes before linearized reading'  (duration: 196.148765ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:38:38.410871Z","caller":"traceutil/trace.go:171","msg":"trace[1839811041] linearizableReadLoop","detail":"{readStateIndex:1185; appliedIndex:1184; }","duration":"263.61331ms","start":"2024-08-19T18:38:38.147246Z","end":"2024-08-19T18:38:38.410859Z","steps":["trace[1839811041] 'read index received'  (duration: 263.493041ms)","trace[1839811041] 'applied index is now lower than readState.Index'  (duration: 118.211µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T18:38:38.411118Z","caller":"traceutil/trace.go:171","msg":"trace[2004033757] transaction","detail":"{read_only:false; response_revision:1156; number_of_response:1; }","duration":"520.108398ms","start":"2024-08-19T18:38:37.891000Z","end":"2024-08-19T18:38:38.411108Z","steps":["trace[2004033757] 'process raft request'  (duration: 519.777125ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:38:38.411215Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:38:37.890981Z","time spent":"520.160379ms","remote":"127.0.0.1:36512","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-q5gnfjrrokitcbfm5yuyhcdsa4\" mod_revision:1067 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-q5gnfjrrokitcbfm5yuyhcdsa4\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-q5gnfjrrokitcbfm5yuyhcdsa4\" > >"}
	{"level":"warn","ts":"2024-08-19T18:38:38.411330Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.083331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:38:38.411347Z","caller":"traceutil/trace.go:171","msg":"trace[1308101096] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1156; }","duration":"264.101011ms","start":"2024-08-19T18:38:38.147241Z","end":"2024-08-19T18:38:38.411342Z","steps":["trace[1308101096] 'agreement among raft nodes before linearized reading'  (duration: 264.069715ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:38:38.411486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.592848ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:38:38.411501Z","caller":"traceutil/trace.go:171","msg":"trace[906152090] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1156; }","duration":"127.608306ms","start":"2024-08-19T18:38:38.283888Z","end":"2024-08-19T18:38:38.411496Z","steps":["trace[906152090] 'agreement among raft nodes before linearized reading'  (duration: 127.584451ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:38:40.948177Z","caller":"traceutil/trace.go:171","msg":"trace[415449643] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"267.679812ms","start":"2024-08-19T18:38:40.680488Z","end":"2024-08-19T18:38:40.948168Z","steps":["trace[415449643] 'process raft request'  (duration: 267.34972ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:38:40.947979Z","caller":"traceutil/trace.go:171","msg":"trace[781183671] linearizableReadLoop","detail":"{readStateIndex:1189; appliedIndex:1188; }","duration":"164.213847ms","start":"2024-08-19T18:38:40.783752Z","end":"2024-08-19T18:38:40.947966Z","steps":["trace[781183671] 'read index received'  (duration: 164.001961ms)","trace[781183671] 'applied index is now lower than readState.Index'  (duration: 211.414µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:38:40.949226Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.471905ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:38:40.949250Z","caller":"traceutil/trace.go:171","msg":"trace[607320068] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1160; }","duration":"165.511773ms","start":"2024-08-19T18:38:40.783731Z","end":"2024-08-19T18:38:40.949243Z","steps":["trace[607320068] 'agreement among raft nodes before linearized reading'  (duration: 165.454728ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:38:40.949631Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.123303ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-8988944d9-56ss9\" ","response":"range_response_count:1 size:4561"}
	{"level":"info","ts":"2024-08-19T18:38:40.949674Z","caller":"traceutil/trace.go:171","msg":"trace[617807425] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-8988944d9-56ss9; range_end:; response_count:1; response_revision:1160; }","duration":"106.170312ms","start":"2024-08-19T18:38:40.843497Z","end":"2024-08-19T18:38:40.949667Z","steps":["trace[617807425] 'agreement among raft nodes before linearized reading'  (duration: 106.053752ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:38:40.950651Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.327409ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:38:40.950712Z","caller":"traceutil/trace.go:171","msg":"trace[25983320] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1160; }","duration":"126.389804ms","start":"2024-08-19T18:38:40.824312Z","end":"2024-08-19T18:38:40.950702Z","steps":["trace[25983320] 'agreement among raft nodes before linearized reading'  (duration: 125.185404ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:40:31.061501Z","caller":"traceutil/trace.go:171","msg":"trace[1138980623] transaction","detail":"{read_only:false; response_revision:1562; number_of_response:1; }","duration":"102.7546ms","start":"2024-08-19T18:40:30.958725Z","end":"2024-08-19T18:40:31.061480Z","steps":["trace[1138980623] 'process raft request'  (duration: 102.597922ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:40:59.890595Z","caller":"traceutil/trace.go:171","msg":"trace[1933265775] transaction","detail":"{read_only:false; response_revision:1760; number_of_response:1; }","duration":"216.764463ms","start":"2024-08-19T18:40:59.673812Z","end":"2024-08-19T18:40:59.890576Z","steps":["trace[1933265775] 'process raft request'  (duration: 216.493ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:41:33.466753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.145574ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9737505237065975663 > lease_revoke:<id:0722916bedd2c6a6>","response":"size:27"}
	
	
	==> kernel <==
	 18:43:05 up 6 min,  0 users,  load average: 0.74, 1.21, 0.71
	Linux addons-966657 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677] <==
	E0819 18:39:20.251223       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.104.75:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.104.75:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.104.75:443: connect: connection refused" logger="UnhandledError"
	I0819 18:39:20.311600       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0819 18:40:05.512922       1 conn.go:339] Error on socket receive: read tcp 192.168.39.241:8443->192.168.39.1:44922: use of closed network connection
	E0819 18:40:05.704749       1 conn.go:339] Error on socket receive: read tcp 192.168.39.241:8443->192.168.39.1:44940: use of closed network connection
	E0819 18:40:23.495244       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.241:8443->10.244.0.24:58902: read: connection reset by peer
	I0819 18:40:30.195814       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 18:40:30.440439       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.131.123"}
	I0819 18:40:31.117332       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 18:40:32.244939       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 18:40:42.399800       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0819 18:40:55.999786       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.89.241"}
	I0819 18:41:18.410925       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:41:18.414691       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:41:18.434729       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:41:18.434781       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:41:18.464656       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:41:18.464827       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:41:18.471932       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:41:18.472028       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:41:18.513129       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:41:18.513183       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 18:41:19.472217       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 18:41:19.513721       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0819 18:41:19.597465       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0819 18:42:54.891341       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.175.220"}
	
	
	==> kube-controller-manager [ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892] <==
	E0819 18:41:40.442753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:41:49.796693       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:41:49.796852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:41:52.868203       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:41:52.868334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:41:54.056713       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:41:54.056836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:42:23.074806       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:42:23.074856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:42:29.554001       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:42:29.554054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:42:32.099283       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:42:32.099407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:42:35.864171       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:42:35.864318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 18:42:54.705755       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="30.731307ms"
	I0819 18:42:54.721293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="15.448253ms"
	I0819 18:42:54.724893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="92.567µs"
	I0819 18:42:57.060837       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0819 18:42:57.070591       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="3.735µs"
	I0819 18:42:57.087502       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0819 18:42:58.438252       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.364795ms"
	I0819 18:42:58.438444       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.921µs"
	W0819 18:43:03.125727       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:43:03.125854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:37:26.355820       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:37:26.400777       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.241"]
	E0819 18:37:26.400871       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:37:26.524250       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:37:26.524293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:37:26.524315       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:37:26.543850       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:37:26.544129       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:37:26.544159       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:37:26.554007       1 config.go:197] "Starting service config controller"
	I0819 18:37:26.554033       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:37:26.554050       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:37:26.554053       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:37:26.554452       1 config.go:326] "Starting node config controller"
	I0819 18:37:26.554460       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:37:26.654928       1 shared_informer.go:320] Caches are synced for node config
	I0819 18:37:26.654996       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:37:26.655036       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685] <==
	W0819 18:37:16.879186       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 18:37:16.879287       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 18:37:16.906280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 18:37:16.906947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:16.914015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:37:16.914469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.067947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 18:37:17.068417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.068928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 18:37:17.069311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.103717       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:37:17.103781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.103840       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 18:37:17.103852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.124597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 18:37:17.124645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.142738       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 18:37:17.142788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.153947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 18:37:17.154018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.189418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:37:17.189467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.284630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 18:37:17.284680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 18:37:18.829236       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 18:42:54 addons-966657 kubelet[1230]: I0819 18:42:54.698831    1230 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a1242b-62ac-48ca-8aaa-3721c77678af" containerName="csi-resizer"
	Aug 19 18:42:54 addons-966657 kubelet[1230]: I0819 18:42:54.807918    1230 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crzlx\" (UniqueName: \"kubernetes.io/projected/ad9ba1ac-d896-4b35-a244-cb2eeaa052ab-kube-api-access-crzlx\") pod \"hello-world-app-55bf9c44b4-pk2z9\" (UID: \"ad9ba1ac-d896-4b35-a244-cb2eeaa052ab\") " pod="default/hello-world-app-55bf9c44b4-pk2z9"
	Aug 19 18:42:55 addons-966657 kubelet[1230]: I0819 18:42:55.922728    1230 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb2z2\" (UniqueName: \"kubernetes.io/projected/92385815-777d-486c-9a29-ea8247710fb6-kube-api-access-vb2z2\") pod \"92385815-777d-486c-9a29-ea8247710fb6\" (UID: \"92385815-777d-486c-9a29-ea8247710fb6\") "
	Aug 19 18:42:55 addons-966657 kubelet[1230]: I0819 18:42:55.925232    1230 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92385815-777d-486c-9a29-ea8247710fb6-kube-api-access-vb2z2" (OuterVolumeSpecName: "kube-api-access-vb2z2") pod "92385815-777d-486c-9a29-ea8247710fb6" (UID: "92385815-777d-486c-9a29-ea8247710fb6"). InnerVolumeSpecName "kube-api-access-vb2z2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 18:42:56 addons-966657 kubelet[1230]: I0819 18:42:56.023656    1230 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vb2z2\" (UniqueName: \"kubernetes.io/projected/92385815-777d-486c-9a29-ea8247710fb6-kube-api-access-vb2z2\") on node \"addons-966657\" DevicePath \"\""
	Aug 19 18:42:56 addons-966657 kubelet[1230]: I0819 18:42:56.397222    1230 scope.go:117] "RemoveContainer" containerID="c94bfc857819cb4a1a1debd9d26b8e4a890e7f6794fdd9e894d4012afae94ebd"
	Aug 19 18:42:56 addons-966657 kubelet[1230]: I0819 18:42:56.437451    1230 scope.go:117] "RemoveContainer" containerID="c94bfc857819cb4a1a1debd9d26b8e4a890e7f6794fdd9e894d4012afae94ebd"
	Aug 19 18:42:56 addons-966657 kubelet[1230]: E0819 18:42:56.438151    1230 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c94bfc857819cb4a1a1debd9d26b8e4a890e7f6794fdd9e894d4012afae94ebd\": container with ID starting with c94bfc857819cb4a1a1debd9d26b8e4a890e7f6794fdd9e894d4012afae94ebd not found: ID does not exist" containerID="c94bfc857819cb4a1a1debd9d26b8e4a890e7f6794fdd9e894d4012afae94ebd"
	Aug 19 18:42:56 addons-966657 kubelet[1230]: I0819 18:42:56.438185    1230 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c94bfc857819cb4a1a1debd9d26b8e4a890e7f6794fdd9e894d4012afae94ebd"} err="failed to get container status \"c94bfc857819cb4a1a1debd9d26b8e4a890e7f6794fdd9e894d4012afae94ebd\": rpc error: code = NotFound desc = could not find container \"c94bfc857819cb4a1a1debd9d26b8e4a890e7f6794fdd9e894d4012afae94ebd\": container with ID starting with c94bfc857819cb4a1a1debd9d26b8e4a890e7f6794fdd9e894d4012afae94ebd not found: ID does not exist"
	Aug 19 18:42:56 addons-966657 kubelet[1230]: I0819 18:42:56.612966    1230 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92385815-777d-486c-9a29-ea8247710fb6" path="/var/lib/kubelet/pods/92385815-777d-486c-9a29-ea8247710fb6/volumes"
	Aug 19 18:42:58 addons-966657 kubelet[1230]: I0819 18:42:58.613538    1230 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="182faf0c-7d87-4277-a4fe-e09a8e0cc76a" path="/var/lib/kubelet/pods/182faf0c-7d87-4277-a4fe-e09a8e0cc76a/volumes"
	Aug 19 18:42:58 addons-966657 kubelet[1230]: I0819 18:42:58.613916    1230 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0fd1df0-f1a2-48fb-b9cf-932fcd5d3e06" path="/var/lib/kubelet/pods/e0fd1df0-f1a2-48fb-b9cf-932fcd5d3e06/volumes"
	Aug 19 18:42:58 addons-966657 kubelet[1230]: E0819 18:42:58.864831    1230 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092978864303795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:42:58 addons-966657 kubelet[1230]: E0819 18:42:58.864869    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092978864303795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:43:00 addons-966657 kubelet[1230]: I0819 18:43:00.359078    1230 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nzpx\" (UniqueName: \"kubernetes.io/projected/7430553d-faa6-4183-9e3e-c9737a635c22-kube-api-access-5nzpx\") pod \"7430553d-faa6-4183-9e3e-c9737a635c22\" (UID: \"7430553d-faa6-4183-9e3e-c9737a635c22\") "
	Aug 19 18:43:00 addons-966657 kubelet[1230]: I0819 18:43:00.359141    1230 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7430553d-faa6-4183-9e3e-c9737a635c22-webhook-cert\") pod \"7430553d-faa6-4183-9e3e-c9737a635c22\" (UID: \"7430553d-faa6-4183-9e3e-c9737a635c22\") "
	Aug 19 18:43:00 addons-966657 kubelet[1230]: I0819 18:43:00.361172    1230 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7430553d-faa6-4183-9e3e-c9737a635c22-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7430553d-faa6-4183-9e3e-c9737a635c22" (UID: "7430553d-faa6-4183-9e3e-c9737a635c22"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 19 18:43:00 addons-966657 kubelet[1230]: I0819 18:43:00.363394    1230 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7430553d-faa6-4183-9e3e-c9737a635c22-kube-api-access-5nzpx" (OuterVolumeSpecName: "kube-api-access-5nzpx") pod "7430553d-faa6-4183-9e3e-c9737a635c22" (UID: "7430553d-faa6-4183-9e3e-c9737a635c22"). InnerVolumeSpecName "kube-api-access-5nzpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 18:43:00 addons-966657 kubelet[1230]: I0819 18:43:00.427311    1230 scope.go:117] "RemoveContainer" containerID="cb089dbbf89f5f402613840da163a60e38aaf3a742feef38f237747759940060"
	Aug 19 18:43:00 addons-966657 kubelet[1230]: I0819 18:43:00.448696    1230 scope.go:117] "RemoveContainer" containerID="cb089dbbf89f5f402613840da163a60e38aaf3a742feef38f237747759940060"
	Aug 19 18:43:00 addons-966657 kubelet[1230]: E0819 18:43:00.449192    1230 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb089dbbf89f5f402613840da163a60e38aaf3a742feef38f237747759940060\": container with ID starting with cb089dbbf89f5f402613840da163a60e38aaf3a742feef38f237747759940060 not found: ID does not exist" containerID="cb089dbbf89f5f402613840da163a60e38aaf3a742feef38f237747759940060"
	Aug 19 18:43:00 addons-966657 kubelet[1230]: I0819 18:43:00.449218    1230 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb089dbbf89f5f402613840da163a60e38aaf3a742feef38f237747759940060"} err="failed to get container status \"cb089dbbf89f5f402613840da163a60e38aaf3a742feef38f237747759940060\": rpc error: code = NotFound desc = could not find container \"cb089dbbf89f5f402613840da163a60e38aaf3a742feef38f237747759940060\": container with ID starting with cb089dbbf89f5f402613840da163a60e38aaf3a742feef38f237747759940060 not found: ID does not exist"
	Aug 19 18:43:00 addons-966657 kubelet[1230]: I0819 18:43:00.459622    1230 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7430553d-faa6-4183-9e3e-c9737a635c22-webhook-cert\") on node \"addons-966657\" DevicePath \"\""
	Aug 19 18:43:00 addons-966657 kubelet[1230]: I0819 18:43:00.459666    1230 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5nzpx\" (UniqueName: \"kubernetes.io/projected/7430553d-faa6-4183-9e3e-c9737a635c22-kube-api-access-5nzpx\") on node \"addons-966657\" DevicePath \"\""
	Aug 19 18:43:00 addons-966657 kubelet[1230]: I0819 18:43:00.613685    1230 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7430553d-faa6-4183-9e3e-c9737a635c22" path="/var/lib/kubelet/pods/7430553d-faa6-4183-9e3e-c9737a635c22/volumes"
	
	
	==> storage-provisioner [ea2a083912efd25f0af7a098d4001fc072b2f79f7a70dad39be5a2f71444bab6] <==
	I0819 18:37:32.612606       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 18:37:32.725501       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 18:37:32.725561       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 18:37:32.847665       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 18:37:32.847871       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-966657_f4ac03dc-fca5-4d4d-a3f4-c51e5cb1ff01!
	I0819 18:37:32.848972       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c115955-4dc8-4394-944c-0691b9016828", APIVersion:"v1", ResourceVersion:"754", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-966657_f4ac03dc-fca5-4d4d-a3f4-c51e5cb1ff01 became leader
	I0819 18:37:33.048632       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-966657_f4ac03dc-fca5-4d4d-a3f4-c51e5cb1ff01!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-966657 -n addons-966657
helpers_test.go:261: (dbg) Run:  kubectl --context addons-966657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (156.22s)
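
For a failure like this one, where the in-VM HTTP probe against the nginx ingress never completed (the corresponding `ssh ... curl` entry in the audit table further down has no recorded end time), a manual re-check against the same profile can be sketched as follows. This is only an illustrative sketch: it assumes the addons-966657 cluster is still running, that the ingress addon has been re-enabled after the `addons disable ingress` call recorded in the audit table, and the 30-second `--max-time` is an arbitrary illustrative value, not something the test uses.

	# hedged reproduction sketch (assumes profile addons-966657 is up and the ingress addon is enabled)
	kubectl --context addons-966657 -n ingress-nginx get pods,svc -o wide
	out/minikube-linux-amd64 -p addons-966657 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"

If the curl above also hangs, inspecting the ingress-nginx controller's service and endpoints inside the cluster is the usual next step before re-running the whole addon test.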

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (309.06s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.096523ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-56ss9" [6ad30996-e1ba-4a2d-9054-f54a241e9efb] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005488324s
addons_test.go:417: (dbg) Run:  kubectl --context addons-966657 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966657 top pods -n kube-system: exit status 1 (82.141289ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fzk2l, age: 3m7.852990242s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966657 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966657 top pods -n kube-system: exit status 1 (74.549056ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fzk2l, age: 3m11.158728034s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966657 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966657 top pods -n kube-system: exit status 1 (67.569601ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fzk2l, age: 3m17.830142986s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966657 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966657 top pods -n kube-system: exit status 1 (84.440051ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fzk2l, age: 3m25.141309876s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966657 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966657 top pods -n kube-system: exit status 1 (90.296384ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fzk2l, age: 3m40.042949046s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966657 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966657 top pods -n kube-system: exit status 1 (68.644113ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fzk2l, age: 4m0.509913913s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966657 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966657 top pods -n kube-system: exit status 1 (68.996238ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fzk2l, age: 4m26.377368544s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966657 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966657 top pods -n kube-system: exit status 1 (67.349992ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fzk2l, age: 4m53.414573338s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966657 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966657 top pods -n kube-system: exit status 1 (66.349403ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fzk2l, age: 5m47.926059766s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966657 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966657 top pods -n kube-system: exit status 1 (65.314263ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fzk2l, age: 7m16.533996507s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966657 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966657 top pods -n kube-system: exit status 1 (67.976367ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fzk2l, age: 8m9.142903389s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
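Each `kubectl top pods` attempt above returned "Metrics not available" even though the metrics-server pod itself was Running, so the aggregated Metrics API is the natural thing to inspect next. The commands below are a hedged debugging sketch, not part of the test: they assume the same kube context is still reachable, and `v1beta1.metrics.k8s.io` is the APIService name that appears in the kube-apiserver log earlier in this report.

	# hedged debugging sketch (assumes context addons-966657 is still reachable)
	kubectl --context addons-966657 get apiservice v1beta1.metrics.k8s.io -o wide
	kubectl --context addons-966657 -n kube-system logs -l k8s-app=metrics-server --tail=50
	kubectl --context addons-966657 top nodes

An APIService whose Available condition is False, or scrape errors in the metrics-server log, would narrow the failure down without waiting out another six-minute polling loop.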
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-966657 -n addons-966657
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-966657 logs -n 25: (1.267730185s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-087609                                                                     | download-only-087609 | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC | 19 Aug 24 18:36 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-219006 | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC |                     |
	|         | binary-mirror-219006                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40397                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-219006                                                                     | binary-mirror-219006 | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC | 19 Aug 24 18:36 UTC |
	| addons  | enable dashboard -p                                                                         | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC |                     |
	|         | addons-966657                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC |                     |
	|         | addons-966657                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-966657 --wait=true                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC | 19 Aug 24 18:39 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-966657 ip                                                                            | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | addons-966657                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-966657 ssh curl -s                                                                   | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-966657 ssh cat                                                                       | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | /opt/local-path-provisioner/pvc-bdc7ef98-d7dd-48c4-baf5-5803f9aa11e7_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | -p addons-966657                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:40 UTC | 19 Aug 24 18:40 UTC |
	|         | -p addons-966657                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:41 UTC | 19 Aug 24 18:41 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966657 addons                                                                        | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:41 UTC | 19 Aug 24 18:41 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-966657 addons                                                                        | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:41 UTC | 19 Aug 24 18:41 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:41 UTC | 19 Aug 24 18:41 UTC |
	|         | addons-966657                                                                               |                      |         |         |                     |                     |
	| ip      | addons-966657 ip                                                                            | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:42 UTC | 19 Aug 24 18:42 UTC |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:42 UTC | 19 Aug 24 18:42 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966657 addons disable                                                                | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:42 UTC | 19 Aug 24 18:43 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-966657 addons                                                                        | addons-966657        | jenkins | v1.33.1 | 19 Aug 24 18:45 UTC | 19 Aug 24 18:45 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:36:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:36:40.661591  438797 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:36:40.661702  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:36:40.661709  438797 out.go:358] Setting ErrFile to fd 2...
	I0819 18:36:40.661716  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:36:40.661910  438797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 18:36:40.662603  438797 out.go:352] Setting JSON to false
	I0819 18:36:40.663577  438797 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8352,"bootTime":1724084249,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:36:40.663643  438797 start.go:139] virtualization: kvm guest
	I0819 18:36:40.665523  438797 out.go:177] * [addons-966657] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:36:40.666647  438797 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 18:36:40.666677  438797 notify.go:220] Checking for updates...
	I0819 18:36:40.668812  438797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:36:40.669997  438797 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 18:36:40.671302  438797 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 18:36:40.672532  438797 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:36:40.673661  438797 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:36:40.674802  438797 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 18:36:40.707429  438797 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 18:36:40.708534  438797 start.go:297] selected driver: kvm2
	I0819 18:36:40.708562  438797 start.go:901] validating driver "kvm2" against <nil>
	I0819 18:36:40.708574  438797 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:36:40.709416  438797 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:36:40.709522  438797 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:36:40.725935  438797 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:36:40.726015  438797 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 18:36:40.726224  438797 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:36:40.726293  438797 cni.go:84] Creating CNI manager for ""
	I0819 18:36:40.726305  438797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:36:40.726313  438797 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 18:36:40.726363  438797 start.go:340] cluster config:
	{Name:addons-966657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-966657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:36:40.726455  438797 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:36:40.728235  438797 out.go:177] * Starting "addons-966657" primary control-plane node in "addons-966657" cluster
	I0819 18:36:40.729379  438797 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:36:40.729431  438797 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:36:40.729442  438797 cache.go:56] Caching tarball of preloaded images
	I0819 18:36:40.729538  438797 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:36:40.729549  438797 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:36:40.729841  438797 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/config.json ...
	I0819 18:36:40.729862  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/config.json: {Name:mk2e4ced8a52cff2912bf206bbef7911649fae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:36:40.730012  438797 start.go:360] acquireMachinesLock for addons-966657: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:36:40.730058  438797 start.go:364] duration metric: took 32.114µs to acquireMachinesLock for "addons-966657"
	I0819 18:36:40.730078  438797 start.go:93] Provisioning new machine with config: &{Name:addons-966657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-966657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:36:40.730138  438797 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 18:36:40.731652  438797 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 18:36:40.731789  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:36:40.731816  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:36:40.746558  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0819 18:36:40.747093  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:36:40.747669  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:36:40.747709  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:36:40.748099  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:36:40.748323  438797 main.go:141] libmachine: (addons-966657) Calling .GetMachineName
	I0819 18:36:40.748477  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:36:40.748622  438797 start.go:159] libmachine.API.Create for "addons-966657" (driver="kvm2")
	I0819 18:36:40.748650  438797 client.go:168] LocalClient.Create starting
	I0819 18:36:40.748693  438797 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem
	I0819 18:36:40.904320  438797 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem
	I0819 18:36:41.071567  438797 main.go:141] libmachine: Running pre-create checks...
	I0819 18:36:41.071599  438797 main.go:141] libmachine: (addons-966657) Calling .PreCreateCheck
	I0819 18:36:41.072189  438797 main.go:141] libmachine: (addons-966657) Calling .GetConfigRaw
	I0819 18:36:41.072709  438797 main.go:141] libmachine: Creating machine...
	I0819 18:36:41.072727  438797 main.go:141] libmachine: (addons-966657) Calling .Create
	I0819 18:36:41.072886  438797 main.go:141] libmachine: (addons-966657) Creating KVM machine...
	I0819 18:36:41.074232  438797 main.go:141] libmachine: (addons-966657) DBG | found existing default KVM network
	I0819 18:36:41.075035  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:41.074891  438819 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c10}
	I0819 18:36:41.075089  438797 main.go:141] libmachine: (addons-966657) DBG | created network xml: 
	I0819 18:36:41.075114  438797 main.go:141] libmachine: (addons-966657) DBG | <network>
	I0819 18:36:41.075125  438797 main.go:141] libmachine: (addons-966657) DBG |   <name>mk-addons-966657</name>
	I0819 18:36:41.075139  438797 main.go:141] libmachine: (addons-966657) DBG |   <dns enable='no'/>
	I0819 18:36:41.075148  438797 main.go:141] libmachine: (addons-966657) DBG |   
	I0819 18:36:41.075159  438797 main.go:141] libmachine: (addons-966657) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 18:36:41.075168  438797 main.go:141] libmachine: (addons-966657) DBG |     <dhcp>
	I0819 18:36:41.075178  438797 main.go:141] libmachine: (addons-966657) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 18:36:41.075188  438797 main.go:141] libmachine: (addons-966657) DBG |     </dhcp>
	I0819 18:36:41.075193  438797 main.go:141] libmachine: (addons-966657) DBG |   </ip>
	I0819 18:36:41.075233  438797 main.go:141] libmachine: (addons-966657) DBG |   
	I0819 18:36:41.075260  438797 main.go:141] libmachine: (addons-966657) DBG | </network>
	I0819 18:36:41.075321  438797 main.go:141] libmachine: (addons-966657) DBG | 
	I0819 18:36:41.080744  438797 main.go:141] libmachine: (addons-966657) DBG | trying to create private KVM network mk-addons-966657 192.168.39.0/24...
	I0819 18:36:41.153960  438797 main.go:141] libmachine: (addons-966657) DBG | private KVM network mk-addons-966657 192.168.39.0/24 created
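Not part of the original run: the private libvirt network created above can be inspected by hand. A minimal sketch with virsh, assuming the qemu:///system connection the kvm2 driver uses and the network name from the log:
	# show the XML libvirt actually stored for the network minikube defined
	virsh --connect qemu:///system net-dumpxml mk-addons-966657
	# confirm the network is active and whether it autostarts
	virsh --connect qemu:///system net-info mk-addons-966657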
	I0819 18:36:41.153996  438797 main.go:141] libmachine: (addons-966657) Setting up store path in /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657 ...
	I0819 18:36:41.154018  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:41.153937  438819 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 18:36:41.154032  438797 main.go:141] libmachine: (addons-966657) Building disk image from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 18:36:41.154197  438797 main.go:141] libmachine: (addons-966657) Downloading /home/jenkins/minikube-integration/19423-430949/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 18:36:41.414175  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:41.413996  438819 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa...
	I0819 18:36:41.498202  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:41.498038  438819 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/addons-966657.rawdisk...
	I0819 18:36:41.498237  438797 main.go:141] libmachine: (addons-966657) DBG | Writing magic tar header
	I0819 18:36:41.498248  438797 main.go:141] libmachine: (addons-966657) DBG | Writing SSH key tar header
	I0819 18:36:41.498257  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:41.498154  438819 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657 ...
	I0819 18:36:41.498268  438797 main.go:141] libmachine: (addons-966657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657
	I0819 18:36:41.498338  438797 main.go:141] libmachine: (addons-966657) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657 (perms=drwx------)
	I0819 18:36:41.498366  438797 main.go:141] libmachine: (addons-966657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines
	I0819 18:36:41.498379  438797 main.go:141] libmachine: (addons-966657) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines (perms=drwxr-xr-x)
	I0819 18:36:41.498387  438797 main.go:141] libmachine: (addons-966657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 18:36:41.498399  438797 main.go:141] libmachine: (addons-966657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949
	I0819 18:36:41.498405  438797 main.go:141] libmachine: (addons-966657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 18:36:41.498414  438797 main.go:141] libmachine: (addons-966657) DBG | Checking permissions on dir: /home/jenkins
	I0819 18:36:41.498422  438797 main.go:141] libmachine: (addons-966657) DBG | Checking permissions on dir: /home
	I0819 18:36:41.498438  438797 main.go:141] libmachine: (addons-966657) DBG | Skipping /home - not owner
	I0819 18:36:41.498458  438797 main.go:141] libmachine: (addons-966657) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube (perms=drwxr-xr-x)
	I0819 18:36:41.498473  438797 main.go:141] libmachine: (addons-966657) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949 (perms=drwxrwxr-x)
	I0819 18:36:41.498486  438797 main.go:141] libmachine: (addons-966657) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 18:36:41.498497  438797 main.go:141] libmachine: (addons-966657) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 18:36:41.498502  438797 main.go:141] libmachine: (addons-966657) Creating domain...
	I0819 18:36:41.499540  438797 main.go:141] libmachine: (addons-966657) define libvirt domain using xml: 
	I0819 18:36:41.499570  438797 main.go:141] libmachine: (addons-966657) <domain type='kvm'>
	I0819 18:36:41.499583  438797 main.go:141] libmachine: (addons-966657)   <name>addons-966657</name>
	I0819 18:36:41.499592  438797 main.go:141] libmachine: (addons-966657)   <memory unit='MiB'>4000</memory>
	I0819 18:36:41.499608  438797 main.go:141] libmachine: (addons-966657)   <vcpu>2</vcpu>
	I0819 18:36:41.499620  438797 main.go:141] libmachine: (addons-966657)   <features>
	I0819 18:36:41.499656  438797 main.go:141] libmachine: (addons-966657)     <acpi/>
	I0819 18:36:41.499680  438797 main.go:141] libmachine: (addons-966657)     <apic/>
	I0819 18:36:41.499690  438797 main.go:141] libmachine: (addons-966657)     <pae/>
	I0819 18:36:41.499697  438797 main.go:141] libmachine: (addons-966657)     
	I0819 18:36:41.499706  438797 main.go:141] libmachine: (addons-966657)   </features>
	I0819 18:36:41.499718  438797 main.go:141] libmachine: (addons-966657)   <cpu mode='host-passthrough'>
	I0819 18:36:41.499727  438797 main.go:141] libmachine: (addons-966657)   
	I0819 18:36:41.499735  438797 main.go:141] libmachine: (addons-966657)   </cpu>
	I0819 18:36:41.499744  438797 main.go:141] libmachine: (addons-966657)   <os>
	I0819 18:36:41.499749  438797 main.go:141] libmachine: (addons-966657)     <type>hvm</type>
	I0819 18:36:41.499763  438797 main.go:141] libmachine: (addons-966657)     <boot dev='cdrom'/>
	I0819 18:36:41.499785  438797 main.go:141] libmachine: (addons-966657)     <boot dev='hd'/>
	I0819 18:36:41.499798  438797 main.go:141] libmachine: (addons-966657)     <bootmenu enable='no'/>
	I0819 18:36:41.499807  438797 main.go:141] libmachine: (addons-966657)   </os>
	I0819 18:36:41.499813  438797 main.go:141] libmachine: (addons-966657)   <devices>
	I0819 18:36:41.499823  438797 main.go:141] libmachine: (addons-966657)     <disk type='file' device='cdrom'>
	I0819 18:36:41.499833  438797 main.go:141] libmachine: (addons-966657)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/boot2docker.iso'/>
	I0819 18:36:41.499841  438797 main.go:141] libmachine: (addons-966657)       <target dev='hdc' bus='scsi'/>
	I0819 18:36:41.499849  438797 main.go:141] libmachine: (addons-966657)       <readonly/>
	I0819 18:36:41.499863  438797 main.go:141] libmachine: (addons-966657)     </disk>
	I0819 18:36:41.499876  438797 main.go:141] libmachine: (addons-966657)     <disk type='file' device='disk'>
	I0819 18:36:41.499889  438797 main.go:141] libmachine: (addons-966657)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 18:36:41.499903  438797 main.go:141] libmachine: (addons-966657)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/addons-966657.rawdisk'/>
	I0819 18:36:41.499910  438797 main.go:141] libmachine: (addons-966657)       <target dev='hda' bus='virtio'/>
	I0819 18:36:41.499916  438797 main.go:141] libmachine: (addons-966657)     </disk>
	I0819 18:36:41.499921  438797 main.go:141] libmachine: (addons-966657)     <interface type='network'>
	I0819 18:36:41.499929  438797 main.go:141] libmachine: (addons-966657)       <source network='mk-addons-966657'/>
	I0819 18:36:41.499940  438797 main.go:141] libmachine: (addons-966657)       <model type='virtio'/>
	I0819 18:36:41.499951  438797 main.go:141] libmachine: (addons-966657)     </interface>
	I0819 18:36:41.499963  438797 main.go:141] libmachine: (addons-966657)     <interface type='network'>
	I0819 18:36:41.499976  438797 main.go:141] libmachine: (addons-966657)       <source network='default'/>
	I0819 18:36:41.499983  438797 main.go:141] libmachine: (addons-966657)       <model type='virtio'/>
	I0819 18:36:41.499988  438797 main.go:141] libmachine: (addons-966657)     </interface>
	I0819 18:36:41.499999  438797 main.go:141] libmachine: (addons-966657)     <serial type='pty'>
	I0819 18:36:41.500006  438797 main.go:141] libmachine: (addons-966657)       <target port='0'/>
	I0819 18:36:41.500011  438797 main.go:141] libmachine: (addons-966657)     </serial>
	I0819 18:36:41.500025  438797 main.go:141] libmachine: (addons-966657)     <console type='pty'>
	I0819 18:36:41.500035  438797 main.go:141] libmachine: (addons-966657)       <target type='serial' port='0'/>
	I0819 18:36:41.500041  438797 main.go:141] libmachine: (addons-966657)     </console>
	I0819 18:36:41.500047  438797 main.go:141] libmachine: (addons-966657)     <rng model='virtio'>
	I0819 18:36:41.500056  438797 main.go:141] libmachine: (addons-966657)       <backend model='random'>/dev/random</backend>
	I0819 18:36:41.500063  438797 main.go:141] libmachine: (addons-966657)     </rng>
	I0819 18:36:41.500068  438797 main.go:141] libmachine: (addons-966657)     
	I0819 18:36:41.500075  438797 main.go:141] libmachine: (addons-966657)     
	I0819 18:36:41.500080  438797 main.go:141] libmachine: (addons-966657)   </devices>
	I0819 18:36:41.500087  438797 main.go:141] libmachine: (addons-966657) </domain>
	I0819 18:36:41.500091  438797 main.go:141] libmachine: (addons-966657) 
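Similarly (not part of the original run), once the domain defined from the XML above exists it can be checked with virsh; a minimal sketch using the domain name from the log:
	# effective domain XML as libvirt expanded it
	virsh --connect qemu:///system dumpxml addons-966657
	# interface addresses reported once the guest has a DHCP lease
	virsh --connect qemu:///system domifaddr addons-966657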
	I0819 18:36:41.504481  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:86:30:07 in network default
	I0819 18:36:41.505011  438797 main.go:141] libmachine: (addons-966657) Ensuring networks are active...
	I0819 18:36:41.505036  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:41.505728  438797 main.go:141] libmachine: (addons-966657) Ensuring network default is active
	I0819 18:36:41.505991  438797 main.go:141] libmachine: (addons-966657) Ensuring network mk-addons-966657 is active
	I0819 18:36:41.506421  438797 main.go:141] libmachine: (addons-966657) Getting domain xml...
	I0819 18:36:41.507078  438797 main.go:141] libmachine: (addons-966657) Creating domain...
	I0819 18:36:42.742586  438797 main.go:141] libmachine: (addons-966657) Waiting to get IP...
	I0819 18:36:42.743506  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:42.743982  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:42.744119  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:42.744060  438819 retry.go:31] will retry after 235.975158ms: waiting for machine to come up
	I0819 18:36:42.981733  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:42.982174  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:42.982200  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:42.982139  438819 retry.go:31] will retry after 356.596416ms: waiting for machine to come up
	I0819 18:36:43.340806  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:43.341250  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:43.341279  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:43.341194  438819 retry.go:31] will retry after 480.923964ms: waiting for machine to come up
	I0819 18:36:43.823921  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:43.824372  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:43.824394  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:43.824337  438819 retry.go:31] will retry after 563.24209ms: waiting for machine to come up
	I0819 18:36:44.389101  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:44.389625  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:44.389658  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:44.389589  438819 retry.go:31] will retry after 672.851827ms: waiting for machine to come up
	I0819 18:36:45.064597  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:45.065153  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:45.065184  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:45.065090  438819 retry.go:31] will retry after 736.246184ms: waiting for machine to come up
	I0819 18:36:45.803008  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:45.803518  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:45.803553  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:45.803431  438819 retry.go:31] will retry after 1.156596743s: waiting for machine to come up
	I0819 18:36:46.962034  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:46.962383  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:46.962405  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:46.962339  438819 retry.go:31] will retry after 1.255605784s: waiting for machine to come up
	I0819 18:36:48.219864  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:48.220393  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:48.220422  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:48.220343  438819 retry.go:31] will retry after 1.84715451s: waiting for machine to come up
	I0819 18:36:50.070606  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:50.071095  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:50.071130  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:50.071033  438819 retry.go:31] will retry after 1.71879158s: waiting for machine to come up
	I0819 18:36:51.791402  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:51.791849  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:51.791878  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:51.791806  438819 retry.go:31] will retry after 2.519575936s: waiting for machine to come up
	I0819 18:36:54.314700  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:54.315062  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:54.315090  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:54.315020  438819 retry.go:31] will retry after 2.837406053s: waiting for machine to come up
	I0819 18:36:57.154690  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:36:57.155142  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find current IP address of domain addons-966657 in network mk-addons-966657
	I0819 18:36:57.155167  438797 main.go:141] libmachine: (addons-966657) DBG | I0819 18:36:57.155087  438819 retry.go:31] will retry after 4.457278559s: waiting for machine to come up
	I0819 18:37:01.614178  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.614650  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has current primary IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.614673  438797 main.go:141] libmachine: (addons-966657) Found IP for machine: 192.168.39.241
	I0819 18:37:01.614716  438797 main.go:141] libmachine: (addons-966657) Reserving static IP address...
	I0819 18:37:01.615156  438797 main.go:141] libmachine: (addons-966657) DBG | unable to find host DHCP lease matching {name: "addons-966657", mac: "52:54:00:eb:04:e6", ip: "192.168.39.241"} in network mk-addons-966657
	I0819 18:37:01.702986  438797 main.go:141] libmachine: (addons-966657) DBG | Getting to WaitForSSH function...
	I0819 18:37:01.703028  438797 main.go:141] libmachine: (addons-966657) Reserved static IP address: 192.168.39.241
	I0819 18:37:01.703042  438797 main.go:141] libmachine: (addons-966657) Waiting for SSH to be available...
	I0819 18:37:01.705820  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.706326  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:01.706360  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.706540  438797 main.go:141] libmachine: (addons-966657) DBG | Using SSH client type: external
	I0819 18:37:01.706567  438797 main.go:141] libmachine: (addons-966657) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa (-rw-------)
	I0819 18:37:01.706600  438797 main.go:141] libmachine: (addons-966657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:37:01.706618  438797 main.go:141] libmachine: (addons-966657) DBG | About to run SSH command:
	I0819 18:37:01.706660  438797 main.go:141] libmachine: (addons-966657) DBG | exit 0
	I0819 18:37:01.833315  438797 main.go:141] libmachine: (addons-966657) DBG | SSH cmd err, output: <nil>: 
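The external SSH probe logged above can be reproduced by hand with the same options, key, and address taken from the log; a minimal sketch (not part of the original run):
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	  -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa \
	  docker@192.168.39.241 'exit 0'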
	I0819 18:37:01.833582  438797 main.go:141] libmachine: (addons-966657) KVM machine creation complete!
	I0819 18:37:01.834072  438797 main.go:141] libmachine: (addons-966657) Calling .GetConfigRaw
	I0819 18:37:01.834690  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:01.834972  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:01.835175  438797 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 18:37:01.835196  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:01.836558  438797 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 18:37:01.836576  438797 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 18:37:01.836583  438797 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 18:37:01.836589  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:01.839730  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.840744  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:01.840775  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.841039  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:01.841287  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:01.841497  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:01.841674  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:01.841852  438797 main.go:141] libmachine: Using SSH client type: native
	I0819 18:37:01.842102  438797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0819 18:37:01.842114  438797 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 18:37:01.952493  438797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:37:01.952526  438797 main.go:141] libmachine: Detecting the provisioner...
	I0819 18:37:01.952546  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:01.955740  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.956055  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:01.956081  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:01.956205  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:01.956435  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:01.956617  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:01.956793  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:01.957044  438797 main.go:141] libmachine: Using SSH client type: native
	I0819 18:37:01.957263  438797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0819 18:37:01.957274  438797 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 18:37:02.069852  438797 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 18:37:02.069941  438797 main.go:141] libmachine: found compatible host: buildroot
	I0819 18:37:02.069955  438797 main.go:141] libmachine: Provisioning with buildroot...
	I0819 18:37:02.069967  438797 main.go:141] libmachine: (addons-966657) Calling .GetMachineName
	I0819 18:37:02.070247  438797 buildroot.go:166] provisioning hostname "addons-966657"
	I0819 18:37:02.070274  438797 main.go:141] libmachine: (addons-966657) Calling .GetMachineName
	I0819 18:37:02.070478  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:02.073216  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.073694  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.073740  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.073959  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:02.074172  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.074347  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.074507  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:02.074690  438797 main.go:141] libmachine: Using SSH client type: native
	I0819 18:37:02.074870  438797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0819 18:37:02.074882  438797 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-966657 && echo "addons-966657" | sudo tee /etc/hostname
	I0819 18:37:02.198933  438797 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-966657
	
	I0819 18:37:02.198972  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:02.201983  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.202429  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.202464  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.202665  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:02.202908  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.203097  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.203291  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:02.203462  438797 main.go:141] libmachine: Using SSH client type: native
	I0819 18:37:02.203656  438797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0819 18:37:02.203677  438797 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-966657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-966657/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-966657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:37:02.322326  438797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:37:02.322357  438797 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 18:37:02.322400  438797 buildroot.go:174] setting up certificates
	I0819 18:37:02.322412  438797 provision.go:84] configureAuth start
	I0819 18:37:02.322425  438797 main.go:141] libmachine: (addons-966657) Calling .GetMachineName
	I0819 18:37:02.322751  438797 main.go:141] libmachine: (addons-966657) Calling .GetIP
	I0819 18:37:02.325480  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.325837  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.325866  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.326032  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:02.328593  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.329006  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.329036  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.329193  438797 provision.go:143] copyHostCerts
	I0819 18:37:02.329278  438797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 18:37:02.329438  438797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 18:37:02.329537  438797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 18:37:02.329607  438797 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.addons-966657 san=[127.0.0.1 192.168.39.241 addons-966657 localhost minikube]
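To double-check the SAN list on the server certificate generated above, openssl can be run against the path from the log; a minimal sketch (not part of the original run):
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'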
	I0819 18:37:02.419609  438797 provision.go:177] copyRemoteCerts
	I0819 18:37:02.419676  438797 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:37:02.419704  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:02.422454  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.422795  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.422823  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.422996  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:02.423208  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.423420  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:02.423629  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:02.507304  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:37:02.531928  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 18:37:02.555925  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:37:02.587928  438797 provision.go:87] duration metric: took 265.498055ms to configureAuth
	I0819 18:37:02.587964  438797 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:37:02.588137  438797 config.go:182] Loaded profile config "addons-966657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:37:02.588228  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:02.590947  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.591272  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.591304  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.591502  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:02.591751  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.591967  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.592101  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:02.592274  438797 main.go:141] libmachine: Using SSH client type: native
	I0819 18:37:02.592440  438797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0819 18:37:02.592455  438797 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:37:02.857116  438797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:37:02.857172  438797 main.go:141] libmachine: Checking connection to Docker...
	I0819 18:37:02.857182  438797 main.go:141] libmachine: (addons-966657) Calling .GetURL
	I0819 18:37:02.858468  438797 main.go:141] libmachine: (addons-966657) DBG | Using libvirt version 6000000
	I0819 18:37:02.860751  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.861066  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.861089  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.861276  438797 main.go:141] libmachine: Docker is up and running!
	I0819 18:37:02.861292  438797 main.go:141] libmachine: Reticulating splines...
	I0819 18:37:02.861302  438797 client.go:171] duration metric: took 22.112640246s to LocalClient.Create
	I0819 18:37:02.861335  438797 start.go:167] duration metric: took 22.112712107s to libmachine.API.Create "addons-966657"
	I0819 18:37:02.861358  438797 start.go:293] postStartSetup for "addons-966657" (driver="kvm2")
	I0819 18:37:02.861375  438797 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:37:02.861397  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:02.861651  438797 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:37:02.861676  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:02.863904  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.864206  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.864229  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.864429  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:02.864618  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.864794  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:02.864923  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:02.951514  438797 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:37:02.955961  438797 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:37:02.955999  438797 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 18:37:02.956090  438797 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 18:37:02.956119  438797 start.go:296] duration metric: took 94.752426ms for postStartSetup
	I0819 18:37:02.956160  438797 main.go:141] libmachine: (addons-966657) Calling .GetConfigRaw
	I0819 18:37:02.956773  438797 main.go:141] libmachine: (addons-966657) Calling .GetIP
	I0819 18:37:02.959255  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.959593  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.959629  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.959883  438797 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/config.json ...
	I0819 18:37:02.960089  438797 start.go:128] duration metric: took 22.229939651s to createHost
	I0819 18:37:02.960114  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:02.962473  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.962819  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:02.962854  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:02.963088  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:02.963307  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.963488  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:02.963612  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:02.963809  438797 main.go:141] libmachine: Using SSH client type: native
	I0819 18:37:02.963999  438797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0819 18:37:02.964013  438797 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:37:03.074030  438797 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724092623.049141146
	
	I0819 18:37:03.074085  438797 fix.go:216] guest clock: 1724092623.049141146
	I0819 18:37:03.074094  438797 fix.go:229] Guest: 2024-08-19 18:37:03.049141146 +0000 UTC Remote: 2024-08-19 18:37:02.960101488 +0000 UTC m=+22.334685821 (delta=89.039658ms)
	I0819 18:37:03.074117  438797 fix.go:200] guest clock delta is within tolerance: 89.039658ms
	I0819 18:37:03.074122  438797 start.go:83] releasing machines lock for "addons-966657", held for 22.344053258s
	I0819 18:37:03.074144  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:03.074436  438797 main.go:141] libmachine: (addons-966657) Calling .GetIP
	I0819 18:37:03.077173  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:03.077527  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:03.077556  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:03.077725  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:03.078331  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:03.078485  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:03.078596  438797 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:37:03.078657  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:03.078661  438797 ssh_runner.go:195] Run: cat /version.json
	I0819 18:37:03.078679  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:03.081128  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:03.081184  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:03.081446  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:03.081473  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:03.081602  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:03.081629  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:03.081861  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:03.081865  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:03.082075  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:03.082103  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:03.082227  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:03.082233  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:03.082416  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:03.082424  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:03.183829  438797 ssh_runner.go:195] Run: systemctl --version
	I0819 18:37:03.189838  438797 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:37:03.345484  438797 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:37:03.351671  438797 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:37:03.351750  438797 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:37:03.367483  438797 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
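If needed (not part of the original run), the CNI configs left after the step above can be listed from the host with the same minikube binary and profile used by the test:
	out/minikube-linux-amd64 -p addons-966657 ssh "sudo ls -l /etc/cni/net.d"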
	I0819 18:37:03.367521  438797 start.go:495] detecting cgroup driver to use...
	I0819 18:37:03.367603  438797 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:37:03.384104  438797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:37:03.398255  438797 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:37:03.398338  438797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:37:03.411990  438797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:37:03.426022  438797 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:37:03.541519  438797 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:37:03.685317  438797 docker.go:233] disabling docker service ...
	I0819 18:37:03.685404  438797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:37:03.699532  438797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:37:03.712621  438797 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:37:03.847936  438797 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:37:03.956128  438797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:37:03.970563  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:37:03.988777  438797 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:37:03.988837  438797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:03.999707  438797 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:37:03.999784  438797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:04.010938  438797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:04.023277  438797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:04.035514  438797 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:37:04.047925  438797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:04.059090  438797 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:04.076362  438797 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
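The sed edits above converge on a small cri-o drop-in: pause image pinned to registry.k8s.io/pause:3.10, cgroupfs as the cgroup manager, conmon placed in the pod cgroup, and unprivileged low ports allowed. A sketch of what /etc/crio/crio.conf.d/02-crio.conf ends up containing under those assumptions (illustrative end state, not the exact file on the VM):

```go
package main

import "os"

// crioDropIn is an illustrative reconstruction of the drop-in the sed edits
// above produce; the real file on the VM may carry additional keys.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// 0644 matches a typical config file mode.
	if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		panic(err)
	}
}
```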
	I0819 18:37:04.087873  438797 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:37:04.099496  438797 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:37:04.099574  438797 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:37:04.112666  438797 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
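The failed "sysctl net.bridge.bridge-nf-call-iptables" above simply means br_netfilter was not loaded yet, so the runner falls back to modprobe br_netfilter and then enables IPv4 forwarding. A small sketch that verifies both prerequisites directly from /proc/sys (checkNetPrereqs is a hypothetical helper):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// checkNetPrereqs reports whether the two kernel settings checked in the log
// are present and enabled: bridge netfilter (needs br_netfilter loaded) and
// IPv4 forwarding.
func checkNetPrereqs() error {
	paths := []string{
		"/proc/sys/net/bridge/bridge-nf-call-iptables", // missing until br_netfilter is loaded
		"/proc/sys/net/ipv4/ip_forward",
	}
	for _, p := range paths {
		b, err := os.ReadFile(p)
		if err != nil {
			return fmt.Errorf("%s missing (module not loaded?): %w", p, err)
		}
		if strings.TrimSpace(string(b)) != "1" {
			return fmt.Errorf("%s is not set to 1", p)
		}
	}
	return nil
}

func main() {
	if err := checkNetPrereqs(); err != nil {
		fmt.Println("prerequisite check failed:", err)
		return
	}
	fmt.Println("bridge netfilter and ip_forward look good")
}
```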
	I0819 18:37:04.122993  438797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:37:04.232571  438797 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:37:04.369907  438797 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:37:04.370008  438797 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:37:04.374615  438797 start.go:563] Will wait 60s for crictl version
	I0819 18:37:04.374686  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:37:04.378628  438797 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:37:04.414542  438797 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:37:04.414637  438797 ssh_runner.go:195] Run: crio --version
	I0819 18:37:04.442323  438797 ssh_runner.go:195] Run: crio --version
	I0819 18:37:04.472624  438797 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:37:04.474056  438797 main.go:141] libmachine: (addons-966657) Calling .GetIP
	I0819 18:37:04.476724  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:04.477038  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:04.477069  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:04.477329  438797 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:37:04.481395  438797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
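The bash one-liner above upserts the host.minikube.internal entry: copy /etc/hosts minus any stale line for that name, append the fresh mapping, and move the temp file into place. The same idea as a local sketch in Go, assuming tab-separated entries; upsertHost is a hypothetical helper:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hostsPath so it contains exactly one "ip<TAB>name"
// entry for name, mirroring the grep -v / echo / cp trick in the log.
func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("upsert failed:", err)
	}
}
```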
	I0819 18:37:04.493811  438797 kubeadm.go:883] updating cluster {Name:addons-966657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-966657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:37:04.493933  438797 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:37:04.493979  438797 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:37:04.525512  438797 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 18:37:04.525592  438797 ssh_runner.go:195] Run: which lz4
	I0819 18:37:04.529306  438797 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 18:37:04.533344  438797 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 18:37:04.533379  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 18:37:05.715255  438797 crio.go:462] duration metric: took 1.185978933s to copy over tarball
	I0819 18:37:05.715346  438797 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 18:37:07.932024  438797 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.216645114s)
	I0819 18:37:07.932053  438797 crio.go:469] duration metric: took 2.216765379s to extract the tarball
	I0819 18:37:07.932061  438797 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 18:37:07.968177  438797 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:37:08.015293  438797 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:37:08.015328  438797 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:37:08.015337  438797 kubeadm.go:934] updating node { 192.168.39.241 8443 v1.31.0 crio true true} ...
	I0819 18:37:08.015484  438797 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-966657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-966657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:37:08.015556  438797 ssh_runner.go:195] Run: crio config
	I0819 18:37:08.066758  438797 cni.go:84] Creating CNI manager for ""
	I0819 18:37:08.066781  438797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:37:08.066792  438797 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:37:08.066842  438797 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.241 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-966657 NodeName:addons-966657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:37:08.066978  438797 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-966657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:37:08.067045  438797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:37:08.077256  438797 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:37:08.077332  438797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:37:08.087129  438797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 18:37:08.105384  438797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:37:08.122473  438797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0819 18:37:08.140514  438797 ssh_runner.go:195] Run: grep 192.168.39.241	control-plane.minikube.internal$ /etc/hosts
	I0819 18:37:08.144383  438797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.241	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:37:08.157348  438797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:37:08.280174  438797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:37:08.298854  438797 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657 for IP: 192.168.39.241
	I0819 18:37:08.298883  438797 certs.go:194] generating shared ca certs ...
	I0819 18:37:08.298901  438797 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.299069  438797 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 18:37:08.375620  438797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt ...
	I0819 18:37:08.375653  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt: {Name:mk16115d9abdf6effc0b1430804b3178a06d38df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.375862  438797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key ...
	I0819 18:37:08.375878  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key: {Name:mk48d912f99f1dc36b0b0fc6644cc62336d64ef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.375975  438797 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 18:37:08.479339  438797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt ...
	I0819 18:37:08.479373  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt: {Name:mk61277915c60e1ebd7acefaf83d0042478e62e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.479559  438797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key ...
	I0819 18:37:08.479577  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key: {Name:mk60e93c8cfd9bbe3e8238ba39bd3a556bacda04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.479673  438797 certs.go:256] generating profile certs ...
	I0819 18:37:08.479753  438797 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.key
	I0819 18:37:08.479778  438797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt with IP's: []
	I0819 18:37:08.642672  438797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt ...
	I0819 18:37:08.642710  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: {Name:mk49593965b499436279bde5737bb16c84d1bef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.642901  438797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.key ...
	I0819 18:37:08.642918  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.key: {Name:mk70a6df7d74775bcc1baec44b78c3b9c382e131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.643011  438797 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.key.7a5737c2
	I0819 18:37:08.643040  438797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.crt.7a5737c2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.241]
	I0819 18:37:08.730968  438797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.crt.7a5737c2 ...
	I0819 18:37:08.731006  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.crt.7a5737c2: {Name:mkcfc0f21e6a1bccadf908c525fafb3fe69fe05e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.731187  438797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.key.7a5737c2 ...
	I0819 18:37:08.731207  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.key.7a5737c2: {Name:mkb0b9ad344eb1dd46d88fbcf8123d3bc6e9982e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:08.731308  438797 certs.go:381] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.crt.7a5737c2 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.crt
	I0819 18:37:08.731405  438797 certs.go:385] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.key.7a5737c2 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.key
	I0819 18:37:08.731468  438797 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.key
	I0819 18:37:08.731501  438797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.crt with IP's: []
	I0819 18:37:09.118243  438797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.crt ...
	I0819 18:37:09.118285  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.crt: {Name:mk99fe35ee7c9c1c7e68245e075497747f40bb1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:09.118464  438797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.key ...
	I0819 18:37:09.118477  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.key: {Name:mk53d2151bda86b6731068605674c5a506741333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:09.118653  438797 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 18:37:09.118691  438797 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:37:09.118716  438797 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:37:09.118741  438797 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
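certs.go above generates the shared minikubeCA and proxyClientCA pairs and then the per-profile client, apiserver (SANs 10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.241), and aggregator certificates. A compact sketch of the self-signed CA step using only the Go standard library; minikube's own crypto helpers differ in the details:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// newCA creates a self-signed CA certificate and key and writes them as PEM,
// roughly what the "generating ... ca cert" steps above do.
func newCA(certPath, keyPath, commonName string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(time.Now().UnixNano()),
		Subject:               pkix.Name{CommonName: commonName},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile(certPath, certPEM, 0o644); err != nil {
		return err
	}
	return os.WriteFile(keyPath, keyPEM, 0o600)
}

func main() {
	if err := newCA("ca.crt", "ca.key", "minikubeCA"); err != nil {
		panic(err)
	}
}
```

The leaf certificates are produced the same way, except the CA (not the template itself) signs them and the SANs listed above go into the template's IPAddresses and DNSNames fields.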
	I0819 18:37:09.119404  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:37:09.144039  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:37:09.167932  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:37:09.192440  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 18:37:09.216892  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 18:37:09.241222  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:37:09.266058  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:37:09.290620  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:37:09.317796  438797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:37:09.343870  438797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:37:09.362924  438797 ssh_runner.go:195] Run: openssl version
	I0819 18:37:09.368884  438797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:37:09.381403  438797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:09.386113  438797 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:09.386185  438797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:09.392258  438797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:37:09.404793  438797 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:37:09.409163  438797 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 18:37:09.409232  438797 kubeadm.go:392] StartCluster: {Name:addons-966657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-966657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:37:09.409339  438797 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:37:09.409440  438797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:37:09.448288  438797 cri.go:89] found id: ""
	I0819 18:37:09.448376  438797 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 18:37:09.458440  438797 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:37:09.468361  438797 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:37:09.480946  438797 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:37:09.480971  438797 kubeadm.go:157] found existing configuration files:
	
	I0819 18:37:09.481036  438797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:37:09.490669  438797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:37:09.490756  438797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:37:09.500169  438797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:37:09.509856  438797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:37:09.509936  438797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:37:09.519386  438797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:37:09.528628  438797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:37:09.528699  438797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:37:09.539265  438797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:37:09.548823  438797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:37:09.548907  438797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:37:09.558609  438797 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:37:09.603506  438797 kubeadm.go:310] W0819 18:37:09.585815     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:37:09.604362  438797 kubeadm.go:310] W0819 18:37:09.587028     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:37:09.720594  438797 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:37:19.299154  438797 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:37:19.299239  438797 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:37:19.299345  438797 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:37:19.299490  438797 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:37:19.299647  438797 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:37:19.299748  438797 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:37:19.301247  438797 out.go:235]   - Generating certificates and keys ...
	I0819 18:37:19.301348  438797 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:37:19.301414  438797 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:37:19.301509  438797 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 18:37:19.301586  438797 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 18:37:19.301675  438797 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 18:37:19.301753  438797 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 18:37:19.301816  438797 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 18:37:19.301917  438797 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-966657 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
	I0819 18:37:19.301962  438797 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 18:37:19.302072  438797 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-966657 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
	I0819 18:37:19.302134  438797 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 18:37:19.302188  438797 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 18:37:19.302233  438797 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 18:37:19.302280  438797 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:37:19.302323  438797 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:37:19.302372  438797 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:37:19.302417  438797 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:37:19.302472  438797 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:37:19.302522  438797 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:37:19.302597  438797 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:37:19.302657  438797 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:37:19.304027  438797 out.go:235]   - Booting up control plane ...
	I0819 18:37:19.304143  438797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:37:19.304231  438797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:37:19.304339  438797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:37:19.304471  438797 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:37:19.304575  438797 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:37:19.304638  438797 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:37:19.304814  438797 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:37:19.304957  438797 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:37:19.305038  438797 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.860845ms
	I0819 18:37:19.305120  438797 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:37:19.305199  438797 kubeadm.go:310] [api-check] The API server is healthy after 5.001924893s
	I0819 18:37:19.305290  438797 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:37:19.305413  438797 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:37:19.305513  438797 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:37:19.305713  438797 kubeadm.go:310] [mark-control-plane] Marking the node addons-966657 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:37:19.305770  438797 kubeadm.go:310] [bootstrap-token] Using token: nmfv8j.mc6x4vdc2focxr3m
	I0819 18:37:19.308225  438797 out.go:235]   - Configuring RBAC rules ...
	I0819 18:37:19.308357  438797 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:37:19.308431  438797 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:37:19.308558  438797 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:37:19.308665  438797 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:37:19.308767  438797 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:37:19.308838  438797 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:37:19.308955  438797 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:37:19.309028  438797 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:37:19.309101  438797 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:37:19.309112  438797 kubeadm.go:310] 
	I0819 18:37:19.309206  438797 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:37:19.309217  438797 kubeadm.go:310] 
	I0819 18:37:19.309328  438797 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:37:19.309337  438797 kubeadm.go:310] 
	I0819 18:37:19.309367  438797 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:37:19.309425  438797 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:37:19.309477  438797 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:37:19.309484  438797 kubeadm.go:310] 
	I0819 18:37:19.309529  438797 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:37:19.309538  438797 kubeadm.go:310] 
	I0819 18:37:19.309584  438797 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:37:19.309591  438797 kubeadm.go:310] 
	I0819 18:37:19.309646  438797 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:37:19.309752  438797 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:37:19.309838  438797 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:37:19.309851  438797 kubeadm.go:310] 
	I0819 18:37:19.309955  438797 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:37:19.310042  438797 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:37:19.310051  438797 kubeadm.go:310] 
	I0819 18:37:19.310127  438797 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nmfv8j.mc6x4vdc2focxr3m \
	I0819 18:37:19.310221  438797 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f \
	I0819 18:37:19.310243  438797 kubeadm.go:310] 	--control-plane 
	I0819 18:37:19.310249  438797 kubeadm.go:310] 
	I0819 18:37:19.310318  438797 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:37:19.310326  438797 kubeadm.go:310] 
	I0819 18:37:19.310422  438797 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nmfv8j.mc6x4vdc2focxr3m \
	I0819 18:37:19.310563  438797 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f 
	I0819 18:37:19.310583  438797 cni.go:84] Creating CNI manager for ""
	I0819 18:37:19.310596  438797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:37:19.312262  438797 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:37:19.313572  438797 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:37:19.323960  438797 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
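The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen earlier. A minimal sketch of such a conflist (bridge plus portmap with host-local IPAM); this is illustrative only and not necessarily the exact file minikube writes:

```go
package main

import "os"

// bridgeConflist is an assumed, minimal bridge CNI config for the pod CIDR
// from the log; the real 1-k8s.conflist may differ in fields and version.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```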
	I0819 18:37:19.342499  438797 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:37:19.342591  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:19.342614  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-966657 minikube.k8s.io/updated_at=2024_08_19T18_37_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=addons-966657 minikube.k8s.io/primary=true
	I0819 18:37:19.387374  438797 ops.go:34] apiserver oom_adj: -16
	I0819 18:37:19.512794  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:20.013865  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:20.513875  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:21.013402  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:21.513013  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:22.013750  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:22.513584  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:23.012982  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:23.513614  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:24.013470  438797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:37:24.124985  438797 kubeadm.go:1113] duration metric: took 4.782464375s to wait for elevateKubeSystemPrivileges
	I0819 18:37:24.125041  438797 kubeadm.go:394] duration metric: took 14.715818031s to StartCluster
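The repeated "kubectl get sa default" calls above are a readiness poll: the cluster is not treated as started until the default ServiceAccount exists, which took about 4.78s here. A sketch of the same wait with client-go and a plain retry loop; the kubeconfig path and timeout are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDefaultSA polls until the "default" ServiceAccount exists, which is
// what the repeated "kubectl get sa default" calls above are doing.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready: %w", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("default service account is ready")
}
```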
	I0819 18:37:24.125065  438797 settings.go:142] acquiring lock: {Name:mkc71def6be9966e5f008a22161bf3ed26f482b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:24.125242  438797 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 18:37:24.125675  438797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:24.125914  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 18:37:24.125945  438797 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:37:24.126031  438797 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
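The toEnable map above drives everything that follows: each addon flagged true gets its own "Setting addon ... =true" entry and a kvm2 plugin server. A trivial sketch of extracting the enabled names from such a map (the values below are illustrative, not the full set from the log):

```go
package main

import (
	"fmt"
	"sort"
)

// enabledAddons returns, in a stable order, the names flagged true in a
// toEnable map like the one logged above.
func enabledAddons(toEnable map[string]bool) []string {
	var names []string
	for name, on := range toEnable {
		if on {
			names = append(names, name)
		}
	}
	sort.Strings(names)
	return names
}

func main() {
	toEnable := map[string]bool{
		"ingress":     true,
		"ingress-dns": true,
		"dashboard":   false,
		"volcano":     true,
	}
	fmt.Println(enabledAddons(toEnable))
}
```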
	I0819 18:37:24.126166  438797 config.go:182] Loaded profile config "addons-966657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:37:24.126169  438797 addons.go:69] Setting default-storageclass=true in profile "addons-966657"
	I0819 18:37:24.126185  438797 addons.go:69] Setting helm-tiller=true in profile "addons-966657"
	I0819 18:37:24.126202  438797 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-966657"
	I0819 18:37:24.126213  438797 addons.go:234] Setting addon helm-tiller=true in "addons-966657"
	I0819 18:37:24.126173  438797 addons.go:69] Setting yakd=true in profile "addons-966657"
	I0819 18:37:24.126221  438797 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-966657"
	I0819 18:37:24.126228  438797 addons.go:69] Setting cloud-spanner=true in profile "addons-966657"
	I0819 18:37:24.126239  438797 addons.go:234] Setting addon yakd=true in "addons-966657"
	I0819 18:37:24.126245  438797 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-966657"
	I0819 18:37:24.126252  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.126262  438797 addons.go:234] Setting addon cloud-spanner=true in "addons-966657"
	I0819 18:37:24.126270  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.126279  438797 addons.go:69] Setting ingress=true in profile "addons-966657"
	I0819 18:37:24.126295  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.126304  438797 addons.go:69] Setting ingress-dns=true in profile "addons-966657"
	I0819 18:37:24.126321  438797 addons.go:234] Setting addon ingress-dns=true in "addons-966657"
	I0819 18:37:24.126348  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.126180  438797 addons.go:69] Setting gcp-auth=true in profile "addons-966657"
	I0819 18:37:24.126405  438797 mustload.go:65] Loading cluster: addons-966657
	I0819 18:37:24.126565  438797 config.go:182] Loaded profile config "addons-966657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:37:24.126700  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.126731  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.126746  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.126760  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.126768  438797 addons.go:69] Setting inspektor-gadget=true in profile "addons-966657"
	I0819 18:37:24.126779  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.126789  438797 addons.go:234] Setting addon inspektor-gadget=true in "addons-966657"
	I0819 18:37:24.126793  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.126810  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.126274  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.126919  438797 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-966657"
	I0819 18:37:24.126964  438797 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-966657"
	I0819 18:37:24.126761  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.126970  438797 addons.go:69] Setting metrics-server=true in profile "addons-966657"
	I0819 18:37:24.126999  438797 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-966657"
	I0819 18:37:24.127013  438797 addons.go:234] Setting addon metrics-server=true in "addons-966657"
	I0819 18:37:24.127022  438797 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-966657"
	I0819 18:37:24.126991  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.127154  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.127180  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.127225  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.127242  438797 addons.go:69] Setting volumesnapshots=true in profile "addons-966657"
	I0819 18:37:24.127249  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.127262  438797 addons.go:234] Setting addon volumesnapshots=true in "addons-966657"
	I0819 18:37:24.127269  438797 addons.go:69] Setting volcano=true in profile "addons-966657"
	I0819 18:37:24.127284  438797 addons.go:234] Setting addon volcano=true in "addons-966657"
	I0819 18:37:24.126748  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.127294  438797 addons.go:69] Setting registry=true in profile "addons-966657"
	I0819 18:37:24.127295  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.127312  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.127315  438797 addons.go:234] Setting addon registry=true in "addons-966657"
	I0819 18:37:24.126911  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.127337  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.127364  438797 addons.go:69] Setting storage-provisioner=true in profile "addons-966657"
	I0819 18:37:24.127382  438797 addons.go:234] Setting addon storage-provisioner=true in "addons-966657"
	I0819 18:37:24.126299  438797 addons.go:234] Setting addon ingress=true in "addons-966657"
	I0819 18:37:24.127413  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.127426  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.127442  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.127472  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.127482  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.127543  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.127568  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.127854  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.127869  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.128178  438797 out.go:177] * Verifying Kubernetes components...
	I0819 18:37:24.128203  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.128259  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.128283  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.128217  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.128398  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.128740  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.128794  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.133630  438797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:37:24.148700  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36425
	I0819 18:37:24.149071  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0819 18:37:24.149740  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.150323  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.149879  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.150091  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.150405  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.150824  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.151061  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.151085  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.151600  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.151631  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.151694  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.152323  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.152373  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.152848  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.153443  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.153498  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.157677  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.160013  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.160473  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.160511  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.167736  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38861
	I0819 18:37:24.168592  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.169260  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.169318  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.169720  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.170304  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.170379  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.175789  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41173
	I0819 18:37:24.176135  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39677
	I0819 18:37:24.176576  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.176728  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.177250  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.177281  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.177391  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.177470  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.177739  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.177883  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.177979  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.178495  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.178544  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.180035  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
	I0819 18:37:24.182281  438797 addons.go:234] Setting addon default-storageclass=true in "addons-966657"
	I0819 18:37:24.182347  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.182738  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.182778  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.183443  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.183554  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0819 18:37:24.184009  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34319
	I0819 18:37:24.184371  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.184854  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.184876  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.185042  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.185067  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.185390  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.185450  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.186016  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.186070  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.186718  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.186771  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.187185  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0819 18:37:24.189553  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.190222  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.190246  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.190691  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.191310  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.191359  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.193629  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.194326  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.194353  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.194774  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.195040  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.196118  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0819 18:37:24.196690  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.197307  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.197325  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.198167  438797 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-966657"
	I0819 18:37:24.198216  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:24.198601  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.198656  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.199716  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38403
	I0819 18:37:24.199886  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.200181  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.200511  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.200571  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.201908  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.201929  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.202537  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.203183  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.203235  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.203638  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42631
	I0819 18:37:24.204247  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.204932  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.204957  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.205512  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.206083  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.206113  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.215728  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0819 18:37:24.216039  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34173
	I0819 18:37:24.218402  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.218558  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0819 18:37:24.219230  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.219258  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.219635  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.220227  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.220276  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.220989  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.221635  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.221665  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.222070  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.222305  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.224145  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41749
	I0819 18:37:24.224165  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.224835  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.224864  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.225283  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.225509  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.226779  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I0819 18:37:24.227408  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36527
	I0819 18:37:24.227993  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.228123  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.229162  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.229183  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.229625  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.229688  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.229951  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.230265  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 18:37:24.230306  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.230331  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.231305  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.231544  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.232112  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.232704  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0819 18:37:24.233256  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.233503  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.233666  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.233828  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.233845  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.234121  438797 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 18:37:24.234624  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.235187  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 18:37:24.235327  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.235366  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.235652  438797 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 18:37:24.235678  438797 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 18:37:24.235701  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.236330  438797 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 18:37:24.237178  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.237203  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.237347  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 18:37:24.237984  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.238334  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.238487  438797 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 18:37:24.238510  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 18:37:24.238531  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.239316  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35741
	I0819 18:37:24.239654  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.239671  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 18:37:24.239984  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.240086  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39723
	I0819 18:37:24.240304  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.240328  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.241810  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 18:37:24.242998  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 18:37:24.243373  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.243406  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.243431  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.243478  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.243495  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.243523  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.243803  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.244193  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.244199  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.244210  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.244217  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.244266  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.244436  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.244514  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.244776  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.244780  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
	I0819 18:37:24.244826  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.244870  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 18:37:24.244973  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.245079  438797 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 18:37:24.245697  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.245743  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.245757  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.246092  438797 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 18:37:24.246113  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 18:37:24.246133  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.246452  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.246888  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39809
	I0819 18:37:24.246941  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46275
	I0819 18:37:24.247013  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 18:37:24.247334  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.248133  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.248196  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:24.248222  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:24.248631  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.248637  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:24.248669  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:24.248678  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:24.248686  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:24.248694  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:24.249230  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.249254  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.249759  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:24.249775  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 18:37:24.249852  438797 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 18:37:24.249988  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I0819 18:37:24.250296  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.250377  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.250632  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.250710  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.251210  438797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 18:37:24.251420  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.251234  438797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 18:37:24.251463  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.251796  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36899
	I0819 18:37:24.252262  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.252281  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.252653  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.252671  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.252746  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.252781  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.253457  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.253490  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.254196  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.254376  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.254392  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.254448  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.254774  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.254973  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.255098  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.255978  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.255999  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.256131  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.256331  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.256506  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.256671  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.257072  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.257517  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.259107  438797 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 18:37:24.259176  438797 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 18:37:24.259255  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.259296  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.259405  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.259414  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.259588  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.259769  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.259918  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.260537  438797 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 18:37:24.260557  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 18:37:24.260575  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.261191  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.261497  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.261517  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.261736  438797 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 18:37:24.262076  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0819 18:37:24.262545  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.262854  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.263068  438797 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:37:24.263242  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.263261  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.263320  438797 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 18:37:24.263334  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 18:37:24.263358  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.263678  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.264050  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.264151  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.264189  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.264501  438797 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:37:24.264521  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:37:24.264541  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.264742  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.264771  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.265042  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:24.265064  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:24.265245  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.265510  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.265727  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.265790  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0819 18:37:24.266129  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.268271  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.268619  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.268729  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.268748  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.268907  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.269163  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.269340  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.269402  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.269577  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.270347  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.270378  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.270621  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.270880  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.271099  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.271305  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.271328  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.271378  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.271687  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.271877  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.273715  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.275532  438797 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 18:37:24.276715  438797 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 18:37:24.276735  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 18:37:24.276764  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.278274  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44933
	I0819 18:37:24.278721  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.279379  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.279399  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.279924  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36717
	I0819 18:37:24.280346  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.280618  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.280828  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.281119  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.281520  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.281539  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.281850  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.281870  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.281918  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.282121  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.282194  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.282831  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.283028  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.283209  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.284158  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.284225  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.286285  438797 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 18:37:24.286285  438797 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 18:37:24.287894  438797 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 18:37:24.287916  438797 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 18:37:24.287947  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.288226  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46539
	I0819 18:37:24.288633  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.289264  438797 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 18:37:24.289404  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40599
	I0819 18:37:24.289756  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.289773  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.289858  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.290119  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.290358  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.290378  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.290394  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.291718  438797 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 18:37:24.292001  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.292151  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.292321  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I0819 18:37:24.292522  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.292542  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.292890  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.292966  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.293004  438797 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 18:37:24.293020  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 18:37:24.293039  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.293048  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.293083  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I0819 18:37:24.293243  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.293331  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.293550  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.293568  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.293731  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.293883  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:24.293927  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.293997  438797 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 18:37:24.294123  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.294301  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.294539  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:24.294559  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:24.294954  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:24.295131  438797 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 18:37:24.295149  438797 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 18:37:24.295170  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.295207  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:24.296659  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.297918  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.298007  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.298283  438797 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 18:37:24.298546  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:24.298757  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.298821  438797 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:37:24.298841  438797 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:37:24.298808  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.298861  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.298884  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.298907  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.299087  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.299243  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.299399  438797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 18:37:24.299415  438797 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 18:37:24.299433  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.299710  438797 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 18:37:24.299764  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.300325  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.300347  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.300554  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.300803  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.300947  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.301091  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.302174  438797 out.go:177]   - Using image docker.io/busybox:stable
	I0819 18:37:24.303034  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.303328  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.303429  438797 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 18:37:24.303444  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 18:37:24.303459  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:24.303483  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.303742  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.303769  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.303770  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.303935  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.303951  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.304097  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.304098  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.304181  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.304224  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.304270  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.304505  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:24.306846  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.307340  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:24.307368  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:24.307552  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:24.307772  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:24.307950  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:24.308080  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	W0819 18:37:24.317974  438797 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54264->192.168.39.241:22: read: connection reset by peer
	I0819 18:37:24.318014  438797 retry.go:31] will retry after 310.662789ms: ssh: handshake failed: read tcp 192.168.39.1:54264->192.168.39.241:22: read: connection reset by peer
	W0819 18:37:24.332819  438797 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54276->192.168.39.241:22: read: connection reset by peer
	I0819 18:37:24.332854  438797 retry.go:31] will retry after 321.912268ms: ssh: handshake failed: read tcp 192.168.39.1:54276->192.168.39.241:22: read: connection reset by peer
	W0819 18:37:24.332906  438797 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54292->192.168.39.241:22: read: connection reset by peer
	I0819 18:37:24.332912  438797 retry.go:31] will retry after 270.762609ms: ssh: handshake failed: read tcp 192.168.39.1:54292->192.168.39.241:22: read: connection reset by peer
	I0819 18:37:24.591822  438797 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 18:37:24.591845  438797 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 18:37:24.615928  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 18:37:24.643216  438797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:37:24.643261  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 18:37:24.652036  438797 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 18:37:24.652068  438797 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 18:37:24.708621  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 18:37:24.727172  438797 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 18:37:24.727205  438797 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 18:37:24.751768  438797 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 18:37:24.751790  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 18:37:24.755690  438797 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 18:37:24.755715  438797 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 18:37:24.762880  438797 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 18:37:24.762912  438797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 18:37:24.764431  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 18:37:24.794316  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:37:24.798802  438797 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 18:37:24.798866  438797 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 18:37:24.844649  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 18:37:24.931835  438797 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 18:37:24.931870  438797 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 18:37:24.966646  438797 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 18:37:24.966671  438797 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 18:37:24.973163  438797 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 18:37:24.973193  438797 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 18:37:25.008176  438797 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 18:37:25.008208  438797 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 18:37:25.016675  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 18:37:25.018401  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 18:37:25.058294  438797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 18:37:25.058329  438797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 18:37:25.085070  438797 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 18:37:25.085106  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 18:37:25.095209  438797 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 18:37:25.095249  438797 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 18:37:25.111211  438797 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 18:37:25.111240  438797 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 18:37:25.192220  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:37:25.221680  438797 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 18:37:25.221719  438797 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 18:37:25.276169  438797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 18:37:25.276204  438797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 18:37:25.312028  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 18:37:25.319986  438797 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 18:37:25.320012  438797 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 18:37:25.322144  438797 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:37:25.322167  438797 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 18:37:25.333279  438797 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 18:37:25.333312  438797 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 18:37:25.483750  438797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 18:37:25.483787  438797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 18:37:25.506127  438797 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 18:37:25.506161  438797 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 18:37:25.539737  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:37:25.544159  438797 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 18:37:25.544182  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 18:37:25.561770  438797 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 18:37:25.561806  438797 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 18:37:25.700672  438797 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 18:37:25.700698  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 18:37:25.766362  438797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 18:37:25.766397  438797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 18:37:25.951045  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 18:37:25.974915  438797 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 18:37:25.974944  438797 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 18:37:26.023294  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 18:37:26.125523  438797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 18:37:26.125564  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 18:37:26.279736  438797 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 18:37:26.279776  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 18:37:26.465274  438797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 18:37:26.465307  438797 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 18:37:26.548678  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 18:37:26.837769  438797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 18:37:26.837796  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 18:37:27.114269  438797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 18:37:27.114293  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 18:37:27.464793  438797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 18:37:27.464828  438797 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 18:37:27.675567  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 18:37:31.300777  438797 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 18:37:31.300829  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:31.304563  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:31.305063  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:31.305090  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:31.305315  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:31.305606  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:31.305807  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:31.306013  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:31.719945  438797 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 18:37:31.816761  438797 addons.go:234] Setting addon gcp-auth=true in "addons-966657"
	I0819 18:37:31.816824  438797 host.go:66] Checking if "addons-966657" exists ...
	I0819 18:37:31.817246  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:31.817297  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:31.833919  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45321
	I0819 18:37:31.834421  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:31.834916  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:31.834941  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:31.835314  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:31.835906  438797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:37:31.835933  438797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:37:31.852528  438797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39713
	I0819 18:37:31.852962  438797 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:37:31.853617  438797 main.go:141] libmachine: Using API Version  1
	I0819 18:37:31.853651  438797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:37:31.854121  438797 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:37:31.854348  438797 main.go:141] libmachine: (addons-966657) Calling .GetState
	I0819 18:37:31.856179  438797 main.go:141] libmachine: (addons-966657) Calling .DriverName
	I0819 18:37:31.856488  438797 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 18:37:31.856529  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHHostname
	I0819 18:37:31.860126  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:31.860598  438797 main.go:141] libmachine: (addons-966657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:04:e6", ip: ""} in network mk-addons-966657: {Iface:virbr1 ExpiryTime:2024-08-19 19:36:54 +0000 UTC Type:0 Mac:52:54:00:eb:04:e6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-966657 Clientid:01:52:54:00:eb:04:e6}
	I0819 18:37:31.860627  438797 main.go:141] libmachine: (addons-966657) DBG | domain addons-966657 has defined IP address 192.168.39.241 and MAC address 52:54:00:eb:04:e6 in network mk-addons-966657
	I0819 18:37:31.860826  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHPort
	I0819 18:37:31.861001  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHKeyPath
	I0819 18:37:31.861185  438797 main.go:141] libmachine: (addons-966657) Calling .GetSSHUsername
	I0819 18:37:31.861380  438797 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/addons-966657/id_rsa Username:docker}
	I0819 18:37:32.647425  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.031457911s)
	I0819 18:37:32.647464  438797 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.004209553s)
	I0819 18:37:32.647493  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.647508  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.647506  438797 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.004211829s)
	I0819 18:37:32.647531  438797 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
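The sed pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1 here). For illustration only, the same edit done programmatically with client-go might look like the sketch below; this is not the code path minikube uses (it shells out to kubectl as logged), and the string handling is simplified to mirror the sed expression that inserts the hosts block before the "forward . /etc/resolv.conf" line.

// Sketch only: client-go equivalent of the logged Corefile edit.
package addons

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Build the hosts stanza the log injects, then splice it in ahead of the forward plugin.
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}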
	I0819 18:37:32.647615  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.938957182s)
	I0819 18:37:32.647662  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.647676  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.647722  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.883239155s)
	I0819 18:37:32.647775  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.647789  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.647789  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.853434397s)
	I0819 18:37:32.647822  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.647832  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.647881  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.803209243s)
	I0819 18:37:32.647898  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.647907  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.647980  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.631266022s)
	I0819 18:37:32.647997  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.648006  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.648097  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.629660213s)
	I0819 18:37:32.648130  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.648144  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.648539  438797 node_ready.go:35] waiting up to 6m0s for node "addons-966657" to be "Ready" ...
	I0819 18:37:32.648686  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.456439405s)
	I0819 18:37:32.648708  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.648718  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.648751  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.336688692s)
	I0819 18:37:32.648777  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.648787  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.648828  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.109062582s)
	I0819 18:37:32.648843  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.648853  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.648869  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.697794495s)
	I0819 18:37:32.648888  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.648897  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.648985  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.625656079s)
	W0819 18:37:32.649010  438797 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 18:37:32.649029  438797 retry.go:31] will retry after 127.636736ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
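The failure above is the usual CRD-establishment race: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, so the first apply can fail with "no matches for kind" until the CRD is established, and retry.go simply re-runs the apply after a short backoff. A minimal sketch of that pattern is below; the attempt count, backoff values, and single manifest path are illustrative, not minikube's, and it assumes kubectl on PATH pointed at the cluster.

// Sketch of retrying a kubectl apply until CRD-dependent resources map.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(manifests []string, attempts int, delay time.Duration) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command("kubectl", args...).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("kubectl apply: %v: %s", e, out)
		// "ensure CRDs are installed first" typically clears once the CRD is established.
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	// Illustrative invocation; the real file set is the one logged above.
	if err := applyWithRetry([]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}, 5, 200*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}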
	I0819 18:37:32.649118  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.100404995s)
	I0819 18:37:32.649155  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.649164  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.651641  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.651678  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.651693  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.651702  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.651711  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.651718  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.651719  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.651727  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.651742  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.651750  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.651808  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.651830  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.651837  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.651845  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.651852  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.651890  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.651911  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.651919  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.651925  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.651933  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.651971  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.651990  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.651997  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.652005  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.652012  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.652049  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.652069  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.652076  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.652084  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.652090  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.652128  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.652149  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.652155  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.652163  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.652170  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.652207  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.652222  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.652244  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.652250  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.652257  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.652264  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.652307  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.652316  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.652325  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.652331  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.652366  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.652391  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.652397  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.652405  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.652411  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.652449  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.652469  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.652476  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.652483  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.652489  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.653375  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.653441  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.653449  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.653461  438797 addons.go:475] Verifying addon ingress=true in "addons-966657"
	I0819 18:37:32.653586  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.653637  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.653645  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.653874  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.653905  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.653932  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.653939  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.654203  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.654234  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.654242  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.654251  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.654262  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.654328  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.654341  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.654413  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.654427  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.655010  438797 out.go:177] * Verifying ingress addon...
	I0819 18:37:32.655798  438797 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-966657 service yakd-dashboard -n yakd-dashboard
	
	I0819 18:37:32.656199  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.656243  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.656245  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.656252  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.656269  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.656305  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.656312  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.656336  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.656374  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.656383  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.656474  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.656500  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.656508  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.656518  438797 addons.go:475] Verifying addon metrics-server=true in "addons-966657"
	I0819 18:37:32.656592  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.656612  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.656620  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.656626  438797 addons.go:475] Verifying addon registry=true in "addons-966657"
	I0819 18:37:32.656653  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.656669  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.656705  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.656718  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.656730  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:32.657499  438797 out.go:177] * Verifying registry addon...
	I0819 18:37:32.657931  438797 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 18:37:32.659515  438797 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 18:37:32.702808  438797 node_ready.go:49] node "addons-966657" has status "Ready":"True"
	I0819 18:37:32.702850  438797 node_ready.go:38] duration metric: took 54.267496ms for node "addons-966657" to be "Ready" ...
	I0819 18:37:32.702863  438797 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:37:32.727199  438797 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 18:37:32.727244  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:32.727345  438797 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 18:37:32.727372  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
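The repeated kapi.go:96 lines that follow are a poll loop: list the pods matching a label selector in a namespace and keep waiting until every match reports Ready. The sketch below shows the same idea with client-go; it is not minikube's kapi implementation, and it assumes a recent k8s.io/apimachinery that provides wait.PollUntilContextTimeout.

// Sketch: poll pods by label selector until all are Ready.
package addons

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForLabeledPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // transient errors and empty lists just keep the poll going
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	})
}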
	I0819 18:37:32.748174  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.748211  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.748542  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.748571  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 18:37:32.748680  438797 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
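The warning above is a standard optimistic-concurrency conflict: the local-path StorageClass changed between the read and the update, so the API server rejected the stale write. A common remedy, shown here as a sketch rather than minikube's actual code, is to re-read and re-apply the change under retry.RetryOnConflict from client-go, using the well-known default-class annotation.

// Sketch: mark a StorageClass as default, retrying on update conflicts.
package addons

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func markDefaultStorageClass(ctx context.Context, cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err // a Conflict here triggers another Get/mutate/Update round
	})
}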
	I0819 18:37:32.768455  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:32.768491  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:32.768831  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:32.768853  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:32.777738  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 18:37:32.815685  438797 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fzk2l" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:32.884383  438797 pod_ready.go:93] pod "coredns-6f6b679f8f-fzk2l" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:32.884420  438797 pod_ready.go:82] duration metric: took 68.684278ms for pod "coredns-6f6b679f8f-fzk2l" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:32.884435  438797 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-h897n" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:32.960681  438797 pod_ready.go:93] pod "coredns-6f6b679f8f-h897n" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:32.960714  438797 pod_ready.go:82] duration metric: took 76.26993ms for pod "coredns-6f6b679f8f-h897n" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:32.960727  438797 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.011264  438797 pod_ready.go:93] pod "etcd-addons-966657" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:33.011297  438797 pod_ready.go:82] duration metric: took 50.56125ms for pod "etcd-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.011311  438797 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.349154  438797 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-966657" context rescaled to 1 replicas
	I0819 18:37:33.350195  438797 pod_ready.go:93] pod "kube-apiserver-addons-966657" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:33.350216  438797 pod_ready.go:82] duration metric: took 338.897988ms for pod "kube-apiserver-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.350228  438797 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.351025  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:33.351090  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:33.370438  438797 pod_ready.go:93] pod "kube-controller-manager-addons-966657" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:33.370476  438797 pod_ready.go:82] duration metric: took 20.237055ms for pod "kube-controller-manager-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.370492  438797 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rthg8" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.460174  438797 pod_ready.go:93] pod "kube-proxy-rthg8" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:33.460200  438797 pod_ready.go:82] duration metric: took 89.69991ms for pod "kube-proxy-rthg8" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.460213  438797 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.670653  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:33.674131  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:33.853823  438797 pod_ready.go:93] pod "kube-scheduler-addons-966657" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:33.853858  438797 pod_ready.go:82] duration metric: took 393.635436ms for pod "kube-scheduler-addons-966657" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:33.853874  438797 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:34.198456  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:34.198833  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:34.303889  438797 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.447373147s)
	I0819 18:37:34.304003  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.628370318s)
	I0819 18:37:34.304133  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:34.304151  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:34.304505  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:34.304531  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:34.304532  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:34.304548  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:34.304565  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:34.304855  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:34.304917  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:34.304935  438797 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-966657"
	I0819 18:37:34.305715  438797 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 18:37:34.306783  438797 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 18:37:34.308321  438797 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 18:37:34.309191  438797 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 18:37:34.309349  438797 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 18:37:34.309369  438797 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 18:37:34.332068  438797 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 18:37:34.332093  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:34.376715  438797 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 18:37:34.376744  438797 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 18:37:34.436011  438797 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 18:37:34.436036  438797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 18:37:34.499351  438797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 18:37:34.665613  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:34.666189  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:34.762678  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.984877697s)
	I0819 18:37:34.762756  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:34.762781  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:34.763164  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:34.763183  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:34.763194  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:34.763202  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:34.763562  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:34.763585  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:34.763621  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:34.817118  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:35.163136  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:35.164685  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:35.315275  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:35.729078  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:35.729892  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:35.787571  438797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.288165213s)
	I0819 18:37:35.787643  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:35.787661  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:35.788003  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:35.788083  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:35.788105  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:35.788123  438797 main.go:141] libmachine: Making call to close driver server
	I0819 18:37:35.788136  438797 main.go:141] libmachine: (addons-966657) Calling .Close
	I0819 18:37:35.788381  438797 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:37:35.788400  438797 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:37:35.788403  438797 main.go:141] libmachine: (addons-966657) DBG | Closing plugin on server side
	I0819 18:37:35.790242  438797 addons.go:475] Verifying addon gcp-auth=true in "addons-966657"
	I0819 18:37:35.791774  438797 out.go:177] * Verifying gcp-auth addon...
	I0819 18:37:35.794041  438797 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 18:37:35.809576  438797 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 18:37:35.809601  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:35.819293  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:35.860120  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:36.163732  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:36.164540  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:36.298257  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:36.314565  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:36.665035  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:36.665908  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:36.797929  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:36.814770  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:37.167633  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:37.167789  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:37.297071  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:37.314122  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:37.662986  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:37.665405  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:37.798603  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:37.813969  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:38.161872  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:38.163363  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:38.298221  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:38.314265  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:38.358942  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:38.663918  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:38.665044  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:38.798273  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:38.813939  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:39.270039  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:39.270627  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:39.397251  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:39.398638  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:39.664129  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:39.664235  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:39.798254  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:39.814057  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:40.164230  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:40.165583  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:40.297275  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:40.313944  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:40.359258  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:40.663131  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:40.663501  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:40.798029  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:40.816456  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:41.162903  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:41.163680  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:41.305984  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:41.314852  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:41.825287  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:41.825513  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:41.825698  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:41.825951  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:42.172471  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:42.173004  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:42.297724  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:42.314264  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:42.360985  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:42.663852  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:42.666030  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:42.799105  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:42.814381  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:43.163257  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:43.163675  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:43.298549  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:43.314736  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:43.663896  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:43.664114  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:43.798425  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:43.814247  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:44.163656  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:44.164099  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:44.297260  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:44.314993  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:44.663065  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:44.663179  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:44.798617  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:44.818938  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:44.864305  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:45.162866  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:45.163009  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:45.298123  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:45.314935  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:45.662503  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:45.663236  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:45.797739  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:45.813970  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:46.162290  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:46.163521  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:46.299304  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:46.315049  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:46.663710  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:46.663717  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:46.797314  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:46.814144  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:47.162550  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:47.163970  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:47.297598  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:47.314614  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:47.359196  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:47.662319  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:47.663975  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:47.798281  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:47.814121  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:48.163341  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:48.164071  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:48.298170  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:48.314047  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:48.663903  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:48.664682  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:48.797523  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:48.814504  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:49.162478  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:49.164332  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:49.297802  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:49.313878  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:49.361006  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:49.714733  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:49.716385  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:49.798770  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:49.815209  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:50.162877  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:50.163116  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:50.298145  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:50.314602  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:50.662692  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:50.663897  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:50.798372  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:50.814301  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:51.165407  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:51.165761  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:51.297712  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:51.313912  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:51.666497  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:51.666768  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:51.797268  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:51.814149  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:51.859418  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:52.163687  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:52.164502  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:52.298309  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:52.314329  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:52.662887  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:52.663824  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:52.797941  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:52.814458  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:53.161949  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:53.164000  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:53.298478  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:53.314766  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:53.662549  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:53.663814  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:53.798665  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:53.813983  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:53.861355  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:54.164987  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:54.167746  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:54.298362  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:54.314321  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:54.663523  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:54.663821  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:54.798203  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:54.814529  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:55.163069  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:55.163794  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:55.297908  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:55.313503  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:55.663209  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:55.664542  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:55.798958  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:55.814128  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:56.162513  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:56.163999  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:56.298072  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:56.314872  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:56.360635  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:56.663115  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:56.663743  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:56.797849  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:56.899916  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:57.163293  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:57.163744  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:57.298240  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:57.314605  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:57.662479  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:57.663930  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:57.798708  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:57.813458  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:58.164222  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:58.164255  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:58.298519  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:58.314166  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:58.664309  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:58.664538  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:58.797993  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:58.813672  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:58.860874  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:59.162996  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:59.167582  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:59.298333  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:59.314347  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:37:59.662280  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:37:59.663332  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:37:59.797355  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:37:59.814034  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:00.162558  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:00.163536  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:00.298993  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:00.315065  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:00.663157  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:00.663301  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:00.797996  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:00.814188  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:01.163391  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:01.166470  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:01.296983  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:01.313232  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:01.360375  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:01.663376  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:01.674870  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:01.797699  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:01.813493  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:02.162651  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:02.166415  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:02.297460  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:02.315711  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:02.667224  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:02.668048  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:02.797799  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:02.814857  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:03.165005  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:03.165362  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:03.297448  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:03.314471  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:03.661883  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:03.663768  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:03.797679  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:03.813157  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:03.860259  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:04.162377  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:04.163452  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:04.298428  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:04.315747  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:04.662620  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:04.663715  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:04.797260  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:04.814032  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:05.163376  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:05.163795  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:05.298258  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:05.315189  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:05.662336  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:05.663775  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:05.798329  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:05.814294  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:06.163343  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:06.164896  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:06.298251  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:06.315443  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:06.364021  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:06.661937  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:06.664194  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:06.798245  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:06.813707  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:07.162627  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:07.163567  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:07.297659  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:07.313168  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:07.662273  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:07.663410  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 18:38:07.798736  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:07.814406  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:08.163513  438797 kapi.go:107] duration metric: took 35.503993674s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 18:38:08.164892  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:08.297791  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:08.314795  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:08.661927  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:08.798039  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:08.814314  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:08.860432  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:09.162905  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:09.298922  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:09.321824  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:09.662897  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:09.798274  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:09.814643  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:10.162642  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:10.297010  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:10.314652  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:10.663185  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:10.799793  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:10.814340  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:11.164049  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:11.298077  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:11.320867  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:11.364171  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:11.662255  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:11.798177  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:11.813936  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:12.161918  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:12.297500  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:12.314333  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:12.663257  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:12.798636  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:12.814675  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:13.162439  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:13.298300  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:13.319085  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:13.375892  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:13.664001  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:13.797532  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:13.813931  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:14.162534  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:14.297924  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:14.313845  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:14.662603  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:14.798242  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:14.813948  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:15.163036  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:15.297607  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:15.315189  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:15.662602  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:15.797711  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:15.814101  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:15.861648  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:16.162598  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:16.301105  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:16.315399  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:16.661954  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:16.797985  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:16.814124  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:17.545065  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:17.545616  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:17.546218  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:17.663006  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:17.797746  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:17.813982  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:18.163295  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:18.298203  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:18.314744  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:18.361196  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:18.662073  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:18.797883  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:18.813780  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:19.162523  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:19.304812  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:19.322776  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:19.663504  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:19.798180  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:19.814360  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:20.429771  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:20.430164  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:20.430337  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:20.430567  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:20.663064  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:20.799672  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:20.814637  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:21.162131  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:21.298267  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:21.314822  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:21.663453  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:21.797428  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:21.814525  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:22.163242  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:22.299162  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:22.400766  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:22.663207  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:22.798151  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:22.814339  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:22.860224  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:23.162636  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:23.297748  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:23.313334  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:23.661980  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:23.801829  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:23.813025  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:24.163337  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:24.297653  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:24.315140  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:24.662078  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:24.797887  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:24.814006  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:24.860745  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:25.167690  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:25.298317  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:25.314138  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:25.661912  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:25.798049  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:25.813953  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:26.166795  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:26.298405  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:26.314377  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:26.663192  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:26.797475  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:26.813982  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:26.872573  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:27.164167  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:27.298947  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:27.313675  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:27.662622  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:27.801449  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:27.814618  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:28.162087  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:28.297092  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:28.314334  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:28.669076  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:28.797584  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:28.815883  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:29.165957  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:29.298302  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:29.314499  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:29.360483  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:29.665721  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:29.799112  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:29.818048  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:30.164152  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:30.302436  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:30.315603  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:30.665010  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:30.798888  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:30.815476  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:31.163415  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:31.298352  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:31.316642  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:31.363183  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:31.662054  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:31.798338  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:31.814166  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:32.163746  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:32.298767  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:32.318404  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:32.663513  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:32.797786  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:32.813655  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:33.163948  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:33.298605  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:33.315898  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:33.371082  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:33.665949  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:33.799610  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:33.814030  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:34.163136  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:34.297596  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:34.314688  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:34.662405  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:34.797458  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:34.813882  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:35.164106  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:35.297932  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:35.315303  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:35.663213  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:35.797184  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:35.814031  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:35.860922  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:36.162749  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:36.297294  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:36.314111  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:36.679256  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:36.798807  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:36.901294  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 18:38:37.162901  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:37.297957  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:37.313971  438797 kapi.go:107] duration metric: took 1m3.004776059s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 18:38:37.662209  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:37.798513  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:37.874687  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:38.427743  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:38.430044  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:38.663741  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:38.798986  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:39.162507  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:39.297424  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:39.662593  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:39.797993  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:40.162215  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:40.298584  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:40.359501  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:40.662305  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:40.966022  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:41.163112  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:41.297230  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:41.662467  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:41.798476  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:42.163240  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:42.298064  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:42.361146  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:42.661939  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:42.798683  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:43.162456  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:43.299294  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:43.663039  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:43.797462  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:44.163384  438797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 18:38:44.298479  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:44.361389  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:44.663146  438797 kapi.go:107] duration metric: took 1m12.005213137s to wait for app.kubernetes.io/name=ingress-nginx ...
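(The kapi.go:96/107 lines above trace minikube's addon-readiness wait: each label selector, registry, csi-hostpath-driver, ingress-nginx and gcp-auth, is polled roughly twice a second until its pods leave Pending, and kapi.go:107 records the total wait once they are Ready. The sketch below is a minimal, hypothetical reproduction of that polling pattern with client-go, not minikube's own implementation; the kubeconfig path, the ingress-nginx namespace, the 500ms interval and the 6-minute timeout are assumptions for illustration.)

    // waitlabel.go: poll pods matching a label selector until all are Ready.
    // A minimal sketch of the wait pattern traced by the kapi.go lines above;
    // not minikube's own code. Kubeconfig, namespace, interval and timeout
    // are assumptions.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func allReady(pods []corev1.Pod) bool {
    	for _, p := range pods {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if !ready {
    			return false
    		}
    	}
    	return true
    }

    func waitForSelector(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
    	start := time.Now()
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
    			fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
    			return nil
    		}
    		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	_ = waitForSelector(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
    }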
	I0819 18:38:44.799974  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:45.298831  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:45.797173  438797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 18:38:46.298332  438797 kapi.go:107] duration metric: took 1m10.504287763s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 18:38:46.299998  438797 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-966657 cluster.
	I0819 18:38:46.301312  438797 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 18:38:46.302586  438797 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 18:38:46.303915  438797 out.go:177] * Enabled addons: nvidia-device-plugin, helm-tiller, cloud-spanner, ingress-dns, metrics-server, inspektor-gadget, storage-provisioner, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0819 18:38:46.305076  438797 addons.go:510] duration metric: took 1m22.179069136s for enable addons: enabled=[nvidia-device-plugin helm-tiller cloud-spanner ingress-dns metrics-server inspektor-gadget storage-provisioner yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
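(The out.go line at 18:38:46 notes that a pod can opt out of gcp-auth credential mounting by carrying a label with the `gcp-auth-skip-secret` key. The sketch below builds such a Pod with client-go types as an illustration only; the label value "true", the pod name, namespace and image are assumptions, not taken from the log.)

    // skipsecret.go: construct a Pod carrying the gcp-auth-skip-secret label
    // mentioned in the log above, so gcp-auth leaves it alone.
    // Label value, pod name, namespace and image are illustrative assumptions.
    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:      "example",
    			Namespace: "default",
    			Labels: map[string]string{
    				"gcp-auth-skip-secret": "true", // key taken from the log; value assumed
    			},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{
    				{Name: "app", Image: "nginx"},
    			},
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }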
	I0819 18:38:46.361484  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:48.860418  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:51.362075  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:53.860225  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:55.860329  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:38:57.862204  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:00.361106  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:02.860540  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:05.360976  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:07.861150  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:10.359698  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:12.362412  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:14.860896  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:17.360857  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:19.860955  438797 pod_ready.go:103] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"False"
	I0819 18:39:20.361179  438797 pod_ready.go:93] pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace has status "Ready":"True"
	I0819 18:39:20.361206  438797 pod_ready.go:82] duration metric: took 1m46.507324914s for pod "metrics-server-8988944d9-56ss9" in "kube-system" namespace to be "Ready" ...
	I0819 18:39:20.361219  438797 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-pndfn" in "kube-system" namespace to be "Ready" ...
	I0819 18:39:20.367550  438797 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-pndfn" in "kube-system" namespace has status "Ready":"True"
	I0819 18:39:20.367583  438797 pod_ready.go:82] duration metric: took 6.357166ms for pod "nvidia-device-plugin-daemonset-pndfn" in "kube-system" namespace to be "Ready" ...
	I0819 18:39:20.367605  438797 pod_ready.go:39] duration metric: took 1m47.664730452s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:39:20.367625  438797 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:39:20.367656  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:39:20.367726  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:39:20.411676  438797 cri.go:89] found id: "da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677"
	I0819 18:39:20.411700  438797 cri.go:89] found id: ""
	I0819 18:39:20.411709  438797 logs.go:276] 1 containers: [da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677]
	I0819 18:39:20.411761  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:20.416138  438797 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:39:20.416206  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:39:20.454911  438797 cri.go:89] found id: "48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8"
	I0819 18:39:20.454936  438797 cri.go:89] found id: ""
	I0819 18:39:20.454944  438797 logs.go:276] 1 containers: [48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8]
	I0819 18:39:20.454994  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:20.459349  438797 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:39:20.459419  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:39:20.502874  438797 cri.go:89] found id: "197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc"
	I0819 18:39:20.502903  438797 cri.go:89] found id: ""
	I0819 18:39:20.502912  438797 logs.go:276] 1 containers: [197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc]
	I0819 18:39:20.502962  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:20.507279  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:39:20.507345  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:39:20.549289  438797 cri.go:89] found id: "56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685"
	I0819 18:39:20.549322  438797 cri.go:89] found id: ""
	I0819 18:39:20.549334  438797 logs.go:276] 1 containers: [56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685]
	I0819 18:39:20.549402  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:20.553374  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:39:20.553445  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:39:20.603168  438797 cri.go:89] found id: "f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc"
	I0819 18:39:20.603194  438797 cri.go:89] found id: ""
	I0819 18:39:20.603203  438797 logs.go:276] 1 containers: [f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc]
	I0819 18:39:20.603259  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:20.608087  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:39:20.608172  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:39:20.652582  438797 cri.go:89] found id: "ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892"
	I0819 18:39:20.652614  438797 cri.go:89] found id: ""
	I0819 18:39:20.652623  438797 logs.go:276] 1 containers: [ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892]
	I0819 18:39:20.652679  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:20.656708  438797 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:39:20.656804  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:39:20.698521  438797 cri.go:89] found id: ""
	I0819 18:39:20.698561  438797 logs.go:276] 0 containers: []
	W0819 18:39:20.698573  438797 logs.go:278] No container was found matching "kindnet"
	I0819 18:39:20.698587  438797 logs.go:123] Gathering logs for kubelet ...
	I0819 18:39:20.698603  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 18:39:20.744623  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:25 addons-966657 kubelet[1230]: W0819 18:37:25.870712    1230 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-966657" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-966657' and this object
	W0819 18:39:20.744798  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:25 addons-966657 kubelet[1230]: E0819 18:37:25.870758    1230 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:20.748613  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.882611    1230 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:20.748778  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.882652    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:20.748911  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.884604    1230 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:20.749074  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.884711    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	I0819 18:39:20.781709  438797 logs.go:123] Gathering logs for dmesg ...
	I0819 18:39:20.781746  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:39:20.797417  438797 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:39:20.797451  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:39:20.929424  438797 logs.go:123] Gathering logs for etcd [48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8] ...
	I0819 18:39:20.929465  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8"
	I0819 18:39:20.983555  438797 logs.go:123] Gathering logs for kube-apiserver [da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677] ...
	I0819 18:39:20.983600  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677"
	I0819 18:39:21.040017  438797 logs.go:123] Gathering logs for coredns [197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc] ...
	I0819 18:39:21.040054  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc"
	I0819 18:39:21.080692  438797 logs.go:123] Gathering logs for kube-scheduler [56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685] ...
	I0819 18:39:21.080729  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685"
	I0819 18:39:21.127313  438797 logs.go:123] Gathering logs for kube-proxy [f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc] ...
	I0819 18:39:21.127355  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc"
	I0819 18:39:21.164794  438797 logs.go:123] Gathering logs for kube-controller-manager [ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892] ...
	I0819 18:39:21.164829  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892"
	I0819 18:39:21.233559  438797 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:39:21.233603  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:39:22.088599  438797 logs.go:123] Gathering logs for container status ...
	I0819 18:39:22.088657  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:39:22.136415  438797 out.go:358] Setting ErrFile to fd 2...
	I0819 18:39:22.136446  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 18:39:22.136506  438797 out.go:270] X Problems detected in kubelet:
	W0819 18:39:22.136519  438797 out.go:270]   Aug 19 18:37:25 addons-966657 kubelet[1230]: E0819 18:37:25.870758    1230 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:22.136526  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.882611    1230 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:22.136538  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.882652    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:22.136546  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.884604    1230 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:22.136555  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.884711    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	I0819 18:39:22.136563  438797 out.go:358] Setting ErrFile to fd 2...
	I0819 18:39:22.136573  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:39:32.137239  438797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:39:32.156167  438797 api_server.go:72] duration metric: took 2m8.030177255s to wait for apiserver process to appear ...
	I0819 18:39:32.156209  438797 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:39:32.156261  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:39:32.156338  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:39:32.197168  438797 cri.go:89] found id: "da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677"
	I0819 18:39:32.197198  438797 cri.go:89] found id: ""
	I0819 18:39:32.197208  438797 logs.go:276] 1 containers: [da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677]
	I0819 18:39:32.197280  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:32.201510  438797 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:39:32.201606  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:39:32.241186  438797 cri.go:89] found id: "48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8"
	I0819 18:39:32.241225  438797 cri.go:89] found id: ""
	I0819 18:39:32.241235  438797 logs.go:276] 1 containers: [48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8]
	I0819 18:39:32.241293  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:32.245892  438797 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:39:32.245981  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:39:32.295547  438797 cri.go:89] found id: "197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc"
	I0819 18:39:32.295580  438797 cri.go:89] found id: ""
	I0819 18:39:32.295590  438797 logs.go:276] 1 containers: [197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc]
	I0819 18:39:32.295654  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:32.300315  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:39:32.300403  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:39:32.340431  438797 cri.go:89] found id: "56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685"
	I0819 18:39:32.340458  438797 cri.go:89] found id: ""
	I0819 18:39:32.340467  438797 logs.go:276] 1 containers: [56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685]
	I0819 18:39:32.340519  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:32.344857  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:39:32.344934  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:39:32.393242  438797 cri.go:89] found id: "f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc"
	I0819 18:39:32.393269  438797 cri.go:89] found id: ""
	I0819 18:39:32.393279  438797 logs.go:276] 1 containers: [f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc]
	I0819 18:39:32.393346  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:32.397711  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:39:32.397797  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:39:32.436248  438797 cri.go:89] found id: "ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892"
	I0819 18:39:32.436277  438797 cri.go:89] found id: ""
	I0819 18:39:32.436286  438797 logs.go:276] 1 containers: [ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892]
	I0819 18:39:32.436355  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:32.440604  438797 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:39:32.440685  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:39:32.484235  438797 cri.go:89] found id: ""
	I0819 18:39:32.484268  438797 logs.go:276] 0 containers: []
	W0819 18:39:32.484281  438797 logs.go:278] No container was found matching "kindnet"
	I0819 18:39:32.484294  438797 logs.go:123] Gathering logs for kubelet ...
	I0819 18:39:32.484309  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 18:39:32.533994  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:25 addons-966657 kubelet[1230]: W0819 18:37:25.870712    1230 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-966657" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-966657' and this object
	W0819 18:39:32.534168  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:25 addons-966657 kubelet[1230]: E0819 18:37:25.870758    1230 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:32.538060  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.882611    1230 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:32.538227  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.882652    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:32.538361  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.884604    1230 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:32.538526  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.884711    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	I0819 18:39:32.578443  438797 logs.go:123] Gathering logs for kube-scheduler [56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685] ...
	I0819 18:39:32.578493  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685"
	I0819 18:39:32.627803  438797 logs.go:123] Gathering logs for kube-controller-manager [ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892] ...
	I0819 18:39:32.627844  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892"
	I0819 18:39:32.688319  438797 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:39:32.688362  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:39:33.678100  438797 logs.go:123] Gathering logs for container status ...
	I0819 18:39:33.678156  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:39:33.725334  438797 logs.go:123] Gathering logs for dmesg ...
	I0819 18:39:33.725378  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:39:33.740220  438797 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:39:33.740266  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:39:33.851832  438797 logs.go:123] Gathering logs for kube-apiserver [da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677] ...
	I0819 18:39:33.851880  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677"
	I0819 18:39:33.898195  438797 logs.go:123] Gathering logs for etcd [48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8] ...
	I0819 18:39:33.898233  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8"
	I0819 18:39:33.955951  438797 logs.go:123] Gathering logs for coredns [197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc] ...
	I0819 18:39:33.956000  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc"
	I0819 18:39:33.994009  438797 logs.go:123] Gathering logs for kube-proxy [f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc] ...
	I0819 18:39:33.994057  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc"
	I0819 18:39:34.031336  438797 out.go:358] Setting ErrFile to fd 2...
	I0819 18:39:34.031366  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 18:39:34.031428  438797 out.go:270] X Problems detected in kubelet:
	W0819 18:39:34.031448  438797 out.go:270]   Aug 19 18:37:25 addons-966657 kubelet[1230]: E0819 18:37:25.870758    1230 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:34.031459  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.882611    1230 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:34.031470  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.882652    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:34.031480  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.884604    1230 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:34.031491  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.884711    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	I0819 18:39:34.031501  438797 out.go:358] Setting ErrFile to fd 2...
	I0819 18:39:34.031511  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:39:44.032883  438797 api_server.go:253] Checking apiserver healthz at https://192.168.39.241:8443/healthz ...
	I0819 18:39:44.038026  438797 api_server.go:279] https://192.168.39.241:8443/healthz returned 200:
	ok
	I0819 18:39:44.039150  438797 api_server.go:141] control plane version: v1.31.0
	I0819 18:39:44.039178  438797 api_server.go:131] duration metric: took 11.88296183s to wait for apiserver health ...
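The healthz probe above queries the apiserver directly at the cluster's advertised address. A minimal way to repeat the same check by hand, using -k to skip TLS verification since the endpoint serves the cluster's own CA, would be:

    # Manual repeat of the health check recorded in the log above.
    curl -k https://192.168.39.241:8443/healthz
    # expected response body: ok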
	I0819 18:39:44.039186  438797 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:39:44.039208  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:39:44.039257  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:39:44.078880  438797 cri.go:89] found id: "da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677"
	I0819 18:39:44.078907  438797 cri.go:89] found id: ""
	I0819 18:39:44.078917  438797 logs.go:276] 1 containers: [da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677]
	I0819 18:39:44.078985  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:44.083366  438797 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:39:44.083443  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:39:44.131025  438797 cri.go:89] found id: "48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8"
	I0819 18:39:44.131052  438797 cri.go:89] found id: ""
	I0819 18:39:44.131062  438797 logs.go:276] 1 containers: [48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8]
	I0819 18:39:44.131128  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:44.135340  438797 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:39:44.135415  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:39:44.177560  438797 cri.go:89] found id: "197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc"
	I0819 18:39:44.177584  438797 cri.go:89] found id: ""
	I0819 18:39:44.177593  438797 logs.go:276] 1 containers: [197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc]
	I0819 18:39:44.177659  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:44.182133  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:39:44.182212  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:39:44.221541  438797 cri.go:89] found id: "56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685"
	I0819 18:39:44.221569  438797 cri.go:89] found id: ""
	I0819 18:39:44.221577  438797 logs.go:276] 1 containers: [56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685]
	I0819 18:39:44.221633  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:44.225749  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:39:44.225838  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:39:44.268699  438797 cri.go:89] found id: "f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc"
	I0819 18:39:44.268730  438797 cri.go:89] found id: ""
	I0819 18:39:44.268739  438797 logs.go:276] 1 containers: [f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc]
	I0819 18:39:44.268803  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:44.272788  438797 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:39:44.272881  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:39:44.310842  438797 cri.go:89] found id: "ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892"
	I0819 18:39:44.310876  438797 cri.go:89] found id: ""
	I0819 18:39:44.310887  438797 logs.go:276] 1 containers: [ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892]
	I0819 18:39:44.310956  438797 ssh_runner.go:195] Run: which crictl
	I0819 18:39:44.315518  438797 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:39:44.315602  438797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:39:44.352641  438797 cri.go:89] found id: ""
	I0819 18:39:44.352670  438797 logs.go:276] 0 containers: []
	W0819 18:39:44.352679  438797 logs.go:278] No container was found matching "kindnet"
	I0819 18:39:44.352688  438797 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:39:44.352701  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:39:45.384989  438797 logs.go:123] Gathering logs for container status ...
	I0819 18:39:45.385060  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:39:45.432294  438797 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:39:45.432334  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:39:45.547783  438797 logs.go:123] Gathering logs for etcd [48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8] ...
	I0819 18:39:45.547826  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8"
	I0819 18:39:45.614208  438797 logs.go:123] Gathering logs for coredns [197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc] ...
	I0819 18:39:45.614261  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc"
	I0819 18:39:45.655480  438797 logs.go:123] Gathering logs for kube-scheduler [56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685] ...
	I0819 18:39:45.655518  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685"
	I0819 18:39:45.699611  438797 logs.go:123] Gathering logs for kube-proxy [f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc] ...
	I0819 18:39:45.699655  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc"
	I0819 18:39:45.734878  438797 logs.go:123] Gathering logs for kube-controller-manager [ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892] ...
	I0819 18:39:45.734914  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892"
	I0819 18:39:45.797678  438797 logs.go:123] Gathering logs for kubelet ...
	I0819 18:39:45.797739  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 18:39:45.840397  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:25 addons-966657 kubelet[1230]: W0819 18:37:25.870712    1230 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-966657" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-966657' and this object
	W0819 18:39:45.840578  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:25 addons-966657 kubelet[1230]: E0819 18:37:25.870758    1230 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:45.844409  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.882611    1230 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:45.844602  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.882652    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:45.844735  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.884604    1230 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:45.844899  438797 logs.go:138] Found kubelet problem: Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.884711    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	I0819 18:39:45.879301  438797 logs.go:123] Gathering logs for dmesg ...
	I0819 18:39:45.879340  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:39:45.895407  438797 logs.go:123] Gathering logs for kube-apiserver [da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677] ...
	I0819 18:39:45.895444  438797 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677"
	I0819 18:39:45.951862  438797 out.go:358] Setting ErrFile to fd 2...
	I0819 18:39:45.951899  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 18:39:45.951973  438797 out.go:270] X Problems detected in kubelet:
	W0819 18:39:45.951981  438797 out.go:270]   Aug 19 18:37:25 addons-966657 kubelet[1230]: E0819 18:37:25.870758    1230 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:45.951988  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.882611    1230 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:45.951999  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.882652    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	W0819 18:39:45.952008  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: W0819 18:37:29.884604    1230 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-966657" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-966657' and this object
	W0819 18:39:45.952016  438797 out.go:270]   Aug 19 18:37:29 addons-966657 kubelet[1230]: E0819 18:37:29.884711    1230 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-966657\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-966657' and this object" logger="UnhandledError"
	I0819 18:39:45.952023  438797 out.go:358] Setting ErrFile to fd 2...
	I0819 18:39:45.952029  438797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:39:55.961783  438797 system_pods.go:59] 18 kube-system pods found
	I0819 18:39:55.961824  438797 system_pods.go:61] "coredns-6f6b679f8f-fzk2l" [b3f241a1-fac9-48ca-aafa-0c699106ad16] Running
	I0819 18:39:55.961830  438797 system_pods.go:61] "csi-hostpath-attacher-0" [92ae9c6d-2f1c-41d7-b221-323290b08fb6] Running
	I0819 18:39:55.961834  438797 system_pods.go:61] "csi-hostpath-resizer-0" [d4a1242b-62ac-48ca-8aaa-3721c77678af] Running
	I0819 18:39:55.961838  438797 system_pods.go:61] "csi-hostpathplugin-rc72c" [f2007ce2-0f1c-494b-b7d7-7b77e3f41204] Running
	I0819 18:39:55.961842  438797 system_pods.go:61] "etcd-addons-966657" [4ba7a901-706b-467b-8544-5d6a45837b6f] Running
	I0819 18:39:55.961845  438797 system_pods.go:61] "kube-apiserver-addons-966657" [28b9be71-cbd9-42de-ab93-77a4123d1384] Running
	I0819 18:39:55.961848  438797 system_pods.go:61] "kube-controller-manager-addons-966657" [8dc3c7cb-03c1-4317-aeac-0ec1297748a0] Running
	I0819 18:39:55.961852  438797 system_pods.go:61] "kube-ingress-dns-minikube" [92385815-777d-486c-9a29-ea8247710fb6] Running
	I0819 18:39:55.961855  438797 system_pods.go:61] "kube-proxy-rthg8" [4baeaa36-3b6c-460f-8f0d-b41cc7d6ac26] Running
	I0819 18:39:55.961858  438797 system_pods.go:61] "kube-scheduler-addons-966657" [8a8c6ceb-8d89-4133-8630-a4256ee2677f] Running
	I0819 18:39:55.961860  438797 system_pods.go:61] "metrics-server-8988944d9-56ss9" [6ad30996-e1ba-4a2d-9054-f54a241e9efb] Running
	I0819 18:39:55.961864  438797 system_pods.go:61] "nvidia-device-plugin-daemonset-pndfn" [c413c9e7-9614-44c5-9845-3d2b40c62cba] Running
	I0819 18:39:55.961866  438797 system_pods.go:61] "registry-6fb4cdfc84-x89qh" [29139ceb-43bf-40ed-8a00-81e990604d2f] Running
	I0819 18:39:55.961869  438797 system_pods.go:61] "registry-proxy-jwchm" [b551e7e6-c198-454e-a913-a278aaa5bf0b] Running
	I0819 18:39:55.961873  438797 system_pods.go:61] "snapshot-controller-56fcc65765-95z9s" [8b3c99a9-f5c0-4457-ba35-4b57b693623a] Running
	I0819 18:39:55.961877  438797 system_pods.go:61] "snapshot-controller-56fcc65765-hjhg4" [25ae0391-8398-486d-9899-9a5c16b65da4] Running
	I0819 18:39:55.961880  438797 system_pods.go:61] "storage-provisioner" [f3f61185-366e-466a-8540-023b9332a231] Running
	I0819 18:39:55.961883  438797 system_pods.go:61] "tiller-deploy-b48cc5f79-vfspv" [6000c6c1-2382-4395-9752-1b553c6bd0a2] Running
	I0819 18:39:55.961890  438797 system_pods.go:74] duration metric: took 11.922697905s to wait for pod list to return data ...
	I0819 18:39:55.961897  438797 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:39:55.964736  438797 default_sa.go:45] found service account: "default"
	I0819 18:39:55.964771  438797 default_sa.go:55] duration metric: took 2.867593ms for default service account to be created ...
	I0819 18:39:55.964782  438797 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:39:55.972591  438797 system_pods.go:86] 18 kube-system pods found
	I0819 18:39:55.972625  438797 system_pods.go:89] "coredns-6f6b679f8f-fzk2l" [b3f241a1-fac9-48ca-aafa-0c699106ad16] Running
	I0819 18:39:55.972630  438797 system_pods.go:89] "csi-hostpath-attacher-0" [92ae9c6d-2f1c-41d7-b221-323290b08fb6] Running
	I0819 18:39:55.972634  438797 system_pods.go:89] "csi-hostpath-resizer-0" [d4a1242b-62ac-48ca-8aaa-3721c77678af] Running
	I0819 18:39:55.972638  438797 system_pods.go:89] "csi-hostpathplugin-rc72c" [f2007ce2-0f1c-494b-b7d7-7b77e3f41204] Running
	I0819 18:39:55.972641  438797 system_pods.go:89] "etcd-addons-966657" [4ba7a901-706b-467b-8544-5d6a45837b6f] Running
	I0819 18:39:55.972645  438797 system_pods.go:89] "kube-apiserver-addons-966657" [28b9be71-cbd9-42de-ab93-77a4123d1384] Running
	I0819 18:39:55.972648  438797 system_pods.go:89] "kube-controller-manager-addons-966657" [8dc3c7cb-03c1-4317-aeac-0ec1297748a0] Running
	I0819 18:39:55.972653  438797 system_pods.go:89] "kube-ingress-dns-minikube" [92385815-777d-486c-9a29-ea8247710fb6] Running
	I0819 18:39:55.972656  438797 system_pods.go:89] "kube-proxy-rthg8" [4baeaa36-3b6c-460f-8f0d-b41cc7d6ac26] Running
	I0819 18:39:55.972659  438797 system_pods.go:89] "kube-scheduler-addons-966657" [8a8c6ceb-8d89-4133-8630-a4256ee2677f] Running
	I0819 18:39:55.972662  438797 system_pods.go:89] "metrics-server-8988944d9-56ss9" [6ad30996-e1ba-4a2d-9054-f54a241e9efb] Running
	I0819 18:39:55.972665  438797 system_pods.go:89] "nvidia-device-plugin-daemonset-pndfn" [c413c9e7-9614-44c5-9845-3d2b40c62cba] Running
	I0819 18:39:55.972672  438797 system_pods.go:89] "registry-6fb4cdfc84-x89qh" [29139ceb-43bf-40ed-8a00-81e990604d2f] Running
	I0819 18:39:55.972675  438797 system_pods.go:89] "registry-proxy-jwchm" [b551e7e6-c198-454e-a913-a278aaa5bf0b] Running
	I0819 18:39:55.972678  438797 system_pods.go:89] "snapshot-controller-56fcc65765-95z9s" [8b3c99a9-f5c0-4457-ba35-4b57b693623a] Running
	I0819 18:39:55.972681  438797 system_pods.go:89] "snapshot-controller-56fcc65765-hjhg4" [25ae0391-8398-486d-9899-9a5c16b65da4] Running
	I0819 18:39:55.972685  438797 system_pods.go:89] "storage-provisioner" [f3f61185-366e-466a-8540-023b9332a231] Running
	I0819 18:39:55.972688  438797 system_pods.go:89] "tiller-deploy-b48cc5f79-vfspv" [6000c6c1-2382-4395-9752-1b553c6bd0a2] Running
	I0819 18:39:55.972694  438797 system_pods.go:126] duration metric: took 7.907113ms to wait for k8s-apps to be running ...
	I0819 18:39:55.972702  438797 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:39:55.972753  438797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:39:55.987949  438797 system_svc.go:56] duration metric: took 15.23428ms WaitForService to wait for kubelet
	I0819 18:39:55.988070  438797 kubeadm.go:582] duration metric: took 2m31.862008825s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:39:55.988126  438797 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:39:55.991337  438797 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:39:55.991372  438797 node_conditions.go:123] node cpu capacity is 2
	I0819 18:39:55.991390  438797 node_conditions.go:105] duration metric: took 3.258111ms to run NodePressure ...
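The NodePressure step above reads the node's reported capacity (2 CPUs, 17734596Ki of ephemeral storage). A hedged way to inspect the same values directly (the jsonpath field names are standard Kubernetes API fields, not taken from this log):

    # Print the capacity block that the NodePressure check reads.
    kubectl --context addons-966657 get node addons-966657 \
      -o jsonpath='{.status.capacity}'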
	I0819 18:39:55.991407  438797 start.go:241] waiting for startup goroutines ...
	I0819 18:39:55.991417  438797 start.go:246] waiting for cluster config update ...
	I0819 18:39:55.991439  438797 start.go:255] writing updated cluster config ...
	I0819 18:39:55.991763  438797 ssh_runner.go:195] Run: rm -f paused
	I0819 18:39:56.046693  438797 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:39:56.048564  438797 out.go:177] * Done! kubectl is now configured to use "addons-966657" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.488599812Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724093133488571808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=328b2f2e-b61c-4079-8590-adde939ac99b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.489179532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89ac8881-367b-4c69-b337-bc7256b27a10 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.489237137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89ac8881-367b-4c69-b337-bc7256b27a10 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.489531765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f68392913c1ef75853565e07e9df2d568477c89acdc7d92319c0569de3402fbc,PodSandboxId:5a2ec64f99d20e30565c1a324b4e09615fed9e9a1824e46ba1646ddef5722318,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724092977362547599,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pk2z9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad9ba1ac-d896-4b35-a244-cb2eeaa052ab,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de02a8dc20f05b40d74c104934523b5394479cbe0bfc2a7561b7b252d2cd77b8,PodSandboxId:78482e936d69a95ce711d42ddb4bf237568a5559194d1d0c02cf99c6f95b36b5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724092838368839511,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7779fae-ee4a-477e-8939-1538b08b9407,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8ee3211c4b7ccde1e371cce8c18b036120962956af84f30af1eea165a1d299,PodSandboxId:be6fdbbd0e74c71a307c454e6d018009f7144b8fd5d78a35fbaf8d337c20b020,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724092798359155854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a470222-a47f-4dff-b
bbd-80c1c0ad3058,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1976b4f5fd018ad82870990fff98b9aacc811dacc169b3db8699d711fb9a977c,PodSandboxId:18b60c4088f3c3eeef93af9d2788b9c2b0e7bf1989da331af554ed99bf5720c8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724092700529237517,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-7rt79,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: a45c0af7-5e2d-4b0a-87e0-eca7079e429c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269,PodSandboxId:45c10fa74e7d57175de2368f7ed69fe5fdb39d0f5f4d939cae05f01960499a8a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724092689934674569,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-56ss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad30996-e1ba-4a2d-9054-f54a241e9efb,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2a083912efd25f0af7a098d4001fc072b2f79f7a70dad39be5a2f71444bab6,PodSandboxId:a6c1288294afeabf56e5f2b179f122e3546f34f25581b2c969e3b25454e134a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092651097137947,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3f61185-366e-466a-8540-023b9332a231,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc,PodSandboxId:286057f9bd85c480e302c4e60b7a3ec06a1cb6c39ab7ede8d742fda63b4f6345,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724092646288928922,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fzk2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f241a1-fac9-48ca-aafa-0c699106ad16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc,PodSandboxId:30d55afa4ea740d9a8e2ce7eca2b534dfbae555a453daf570015af4eb1b204fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092645112645591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rthg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4baeaa36-3b6c-460f-8f0d-b41cc7d6ac26,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685,PodSandboxId:ac2707a4235eaf409decd100f5860a6363440a764462927ad1c8bac4ca2fd2c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092633517939036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 717d506cf9569961af71f86e9ac27e29,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677,PodSandboxId:a83464ab9a456d1d58cc7c15f20e20a92fee5da49457aa1b5ddac7a813e649bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092633485811297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 288b4b601d8383b517860a32412ecddd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8,PodSandboxId:bd05ad205c9bbcc28daa7843afcb3c8d9de9553d8527af469fb03e663cf59f46,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d
0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092633469256040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d246e74ff84d05d8ef33ef849aa3e3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892,PodSandboxId:177205b0fa8540ec961c98690a932bcc10dc74b5648ae022d3a6bff7828cd551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7
a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092633425469231,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a18de3373db5c4569356a3d2e49ce8b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89ac8881-367b-4c69-b337-bc7256b27a10 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.526971526Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3f6e0fe-4ee3-47d5-8a78-baaea4b2c3d8 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.527073746Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3f6e0fe-4ee3-47d5-8a78-baaea4b2c3d8 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.528243681Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a428c00f-2aa8-4b74-8c4f-1731864e4d08 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.529641489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724093133529611707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a428c00f-2aa8-4b74-8c4f-1731864e4d08 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.530332400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7916a8d8-a729-4cee-b16d-f960811f5cc5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.530421718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7916a8d8-a729-4cee-b16d-f960811f5cc5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.530759332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f68392913c1ef75853565e07e9df2d568477c89acdc7d92319c0569de3402fbc,PodSandboxId:5a2ec64f99d20e30565c1a324b4e09615fed9e9a1824e46ba1646ddef5722318,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724092977362547599,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pk2z9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad9ba1ac-d896-4b35-a244-cb2eeaa052ab,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de02a8dc20f05b40d74c104934523b5394479cbe0bfc2a7561b7b252d2cd77b8,PodSandboxId:78482e936d69a95ce711d42ddb4bf237568a5559194d1d0c02cf99c6f95b36b5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724092838368839511,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7779fae-ee4a-477e-8939-1538b08b9407,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8ee3211c4b7ccde1e371cce8c18b036120962956af84f30af1eea165a1d299,PodSandboxId:be6fdbbd0e74c71a307c454e6d018009f7144b8fd5d78a35fbaf8d337c20b020,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724092798359155854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a470222-a47f-4dff-b
bbd-80c1c0ad3058,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1976b4f5fd018ad82870990fff98b9aacc811dacc169b3db8699d711fb9a977c,PodSandboxId:18b60c4088f3c3eeef93af9d2788b9c2b0e7bf1989da331af554ed99bf5720c8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724092700529237517,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-7rt79,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: a45c0af7-5e2d-4b0a-87e0-eca7079e429c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269,PodSandboxId:45c10fa74e7d57175de2368f7ed69fe5fdb39d0f5f4d939cae05f01960499a8a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724092689934674569,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-56ss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad30996-e1ba-4a2d-9054-f54a241e9efb,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2a083912efd25f0af7a098d4001fc072b2f79f7a70dad39be5a2f71444bab6,PodSandboxId:a6c1288294afeabf56e5f2b179f122e3546f34f25581b2c969e3b25454e134a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092651097137947,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3f61185-366e-466a-8540-023b9332a231,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc,PodSandboxId:286057f9bd85c480e302c4e60b7a3ec06a1cb6c39ab7ede8d742fda63b4f6345,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724092646288928922,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fzk2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f241a1-fac9-48ca-aafa-0c699106ad16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc,PodSandboxId:30d55afa4ea740d9a8e2ce7eca2b534dfbae555a453daf570015af4eb1b204fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092645112645591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rthg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4baeaa36-3b6c-460f-8f0d-b41cc7d6ac26,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685,PodSandboxId:ac2707a4235eaf409decd100f5860a6363440a764462927ad1c8bac4ca2fd2c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092633517939036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 717d506cf9569961af71f86e9ac27e29,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677,PodSandboxId:a83464ab9a456d1d58cc7c15f20e20a92fee5da49457aa1b5ddac7a813e649bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092633485811297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 288b4b601d8383b517860a32412ecddd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8,PodSandboxId:bd05ad205c9bbcc28daa7843afcb3c8d9de9553d8527af469fb03e663cf59f46,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d
0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092633469256040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d246e74ff84d05d8ef33ef849aa3e3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892,PodSandboxId:177205b0fa8540ec961c98690a932bcc10dc74b5648ae022d3a6bff7828cd551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7
a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092633425469231,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a18de3373db5c4569356a3d2e49ce8b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7916a8d8-a729-4cee-b16d-f960811f5cc5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.576674945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fe4e458-1072-4c0b-bdb1-814f9faa8d3b name=/runtime.v1.RuntimeService/Version
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.576777614Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fe4e458-1072-4c0b-bdb1-814f9faa8d3b name=/runtime.v1.RuntimeService/Version
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.578009416Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c4d9e76f-92b2-43e3-8e9c-f369fd78f8ad name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.579584450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724093133579551121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4d9e76f-92b2-43e3-8e9c-f369fd78f8ad name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.580143943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68a213ca-13e7-4bbc-9db4-6692ae58d744 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.580214736Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68a213ca-13e7-4bbc-9db4-6692ae58d744 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.580517217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f68392913c1ef75853565e07e9df2d568477c89acdc7d92319c0569de3402fbc,PodSandboxId:5a2ec64f99d20e30565c1a324b4e09615fed9e9a1824e46ba1646ddef5722318,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724092977362547599,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pk2z9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad9ba1ac-d896-4b35-a244-cb2eeaa052ab,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de02a8dc20f05b40d74c104934523b5394479cbe0bfc2a7561b7b252d2cd77b8,PodSandboxId:78482e936d69a95ce711d42ddb4bf237568a5559194d1d0c02cf99c6f95b36b5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724092838368839511,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7779fae-ee4a-477e-8939-1538b08b9407,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8ee3211c4b7ccde1e371cce8c18b036120962956af84f30af1eea165a1d299,PodSandboxId:be6fdbbd0e74c71a307c454e6d018009f7144b8fd5d78a35fbaf8d337c20b020,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724092798359155854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a470222-a47f-4dff-b
bbd-80c1c0ad3058,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1976b4f5fd018ad82870990fff98b9aacc811dacc169b3db8699d711fb9a977c,PodSandboxId:18b60c4088f3c3eeef93af9d2788b9c2b0e7bf1989da331af554ed99bf5720c8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724092700529237517,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-7rt79,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: a45c0af7-5e2d-4b0a-87e0-eca7079e429c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269,PodSandboxId:45c10fa74e7d57175de2368f7ed69fe5fdb39d0f5f4d939cae05f01960499a8a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724092689934674569,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-56ss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad30996-e1ba-4a2d-9054-f54a241e9efb,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2a083912efd25f0af7a098d4001fc072b2f79f7a70dad39be5a2f71444bab6,PodSandboxId:a6c1288294afeabf56e5f2b179f122e3546f34f25581b2c969e3b25454e134a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092651097137947,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3f61185-366e-466a-8540-023b9332a231,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc,PodSandboxId:286057f9bd85c480e302c4e60b7a3ec06a1cb6c39ab7ede8d742fda63b4f6345,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724092646288928922,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fzk2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f241a1-fac9-48ca-aafa-0c699106ad16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc,PodSandboxId:30d55afa4ea740d9a8e2ce7eca2b534dfbae555a453daf570015af4eb1b204fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092645112645591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rthg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4baeaa36-3b6c-460f-8f0d-b41cc7d6ac26,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685,PodSandboxId:ac2707a4235eaf409decd100f5860a6363440a764462927ad1c8bac4ca2fd2c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092633517939036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 717d506cf9569961af71f86e9ac27e29,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677,PodSandboxId:a83464ab9a456d1d58cc7c15f20e20a92fee5da49457aa1b5ddac7a813e649bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092633485811297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 288b4b601d8383b517860a32412ecddd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8,PodSandboxId:bd05ad205c9bbcc28daa7843afcb3c8d9de9553d8527af469fb03e663cf59f46,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d
0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092633469256040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d246e74ff84d05d8ef33ef849aa3e3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892,PodSandboxId:177205b0fa8540ec961c98690a932bcc10dc74b5648ae022d3a6bff7828cd551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7
a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092633425469231,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a18de3373db5c4569356a3d2e49ce8b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68a213ca-13e7-4bbc-9db4-6692ae58d744 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.617501160Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec17b313-6562-43c1-8206-957ee34f4ff1 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.618138066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec17b313-6562-43c1-8206-957ee34f4ff1 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.619175081Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=373c0e38-5b06-4c3f-b3f7-734c1b7f3003 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.620501722Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724093133620408630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=373c0e38-5b06-4c3f-b3f7-734c1b7f3003 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.621172697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9739282a-078b-48e1-b179-3ca0c7d10618 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.621243005Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9739282a-078b-48e1-b179-3ca0c7d10618 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:45:33 addons-966657 crio[678]: time="2024-08-19 18:45:33.621538032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f68392913c1ef75853565e07e9df2d568477c89acdc7d92319c0569de3402fbc,PodSandboxId:5a2ec64f99d20e30565c1a324b4e09615fed9e9a1824e46ba1646ddef5722318,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724092977362547599,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pk2z9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad9ba1ac-d896-4b35-a244-cb2eeaa052ab,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de02a8dc20f05b40d74c104934523b5394479cbe0bfc2a7561b7b252d2cd77b8,PodSandboxId:78482e936d69a95ce711d42ddb4bf237568a5559194d1d0c02cf99c6f95b36b5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724092838368839511,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7779fae-ee4a-477e-8939-1538b08b9407,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8ee3211c4b7ccde1e371cce8c18b036120962956af84f30af1eea165a1d299,PodSandboxId:be6fdbbd0e74c71a307c454e6d018009f7144b8fd5d78a35fbaf8d337c20b020,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724092798359155854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a470222-a47f-4dff-b
bbd-80c1c0ad3058,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1976b4f5fd018ad82870990fff98b9aacc811dacc169b3db8699d711fb9a977c,PodSandboxId:18b60c4088f3c3eeef93af9d2788b9c2b0e7bf1989da331af554ed99bf5720c8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724092700529237517,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-7rt79,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: a45c0af7-5e2d-4b0a-87e0-eca7079e429c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269,PodSandboxId:45c10fa74e7d57175de2368f7ed69fe5fdb39d0f5f4d939cae05f01960499a8a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724092689934674569,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-56ss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad30996-e1ba-4a2d-9054-f54a241e9efb,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2a083912efd25f0af7a098d4001fc072b2f79f7a70dad39be5a2f71444bab6,PodSandboxId:a6c1288294afeabf56e5f2b179f122e3546f34f25581b2c969e3b25454e134a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092651097137947,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3f61185-366e-466a-8540-023b9332a231,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc,PodSandboxId:286057f9bd85c480e302c4e60b7a3ec06a1cb6c39ab7ede8d742fda63b4f6345,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724092646288928922,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fzk2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f241a1-fac9-48ca-aafa-0c699106ad16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc,PodSandboxId:30d55afa4ea740d9a8e2ce7eca2b534dfbae555a453daf570015af4eb1b204fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092645112645591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rthg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4baeaa36-3b6c-460f-8f0d-b41cc7d6ac26,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685,PodSandboxId:ac2707a4235eaf409decd100f5860a6363440a764462927ad1c8bac4ca2fd2c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092633517939036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 717d506cf9569961af71f86e9ac27e29,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677,PodSandboxId:a83464ab9a456d1d58cc7c15f20e20a92fee5da49457aa1b5ddac7a813e649bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092633485811297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 288b4b601d8383b517860a32412ecddd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8,PodSandboxId:bd05ad205c9bbcc28daa7843afcb3c8d9de9553d8527af469fb03e663cf59f46,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d
0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092633469256040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d246e74ff84d05d8ef33ef849aa3e3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892,PodSandboxId:177205b0fa8540ec961c98690a932bcc10dc74b5648ae022d3a6bff7828cd551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7
a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092633425469231,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a18de3373db5c4569356a3d2e49ce8b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9739282a-078b-48e1-b179-3ca0c7d10618 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f68392913c1ef       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   5a2ec64f99d20       hello-world-app-55bf9c44b4-pk2z9
	de02a8dc20f05       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         4 minutes ago       Running             nginx                     0                   78482e936d69a       nginx
	4d8ee3211c4b7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   be6fdbbd0e74c       busybox
	1976b4f5fd018       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   18b60c4088f3c       local-path-provisioner-86d989889c-7rt79
	9fefad11c7927       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   45c10fa74e7d5       metrics-server-8988944d9-56ss9
	ea2a083912efd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   a6c1288294afe       storage-provisioner
	197c6a1ef6e6e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   286057f9bd85c       coredns-6f6b679f8f-fzk2l
	f4040c311d32e       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        8 minutes ago       Running             kube-proxy                0                   30d55afa4ea74       kube-proxy-rthg8
	56311b94f99b4       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        8 minutes ago       Running             kube-scheduler            0                   ac2707a4235ea       kube-scheduler-addons-966657
	da32522f010e9       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        8 minutes ago       Running             kube-apiserver            0                   a83464ab9a456       kube-apiserver-addons-966657
	48c646c07f67b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   bd05ad205c9bb       etcd-addons-966657
	ea9adb58c21d9       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        8 minutes ago       Running             kube-controller-manager   0                   177205b0fa854       kube-controller-manager-addons-966657
	
	
	==> coredns [197c6a1ef6e6e5d77b5eaa4f8ec40ecab4f74a272869dd3878e9c494b98257fc] <==
	[INFO] 10.244.0.7:43368 - 46601 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093133s
	[INFO] 10.244.0.7:51189 - 6785 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006301s
	[INFO] 10.244.0.7:51189 - 60575 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000056339s
	[INFO] 10.244.0.7:39776 - 17391 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054947s
	[INFO] 10.244.0.7:39776 - 22509 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097421s
	[INFO] 10.244.0.7:58029 - 37893 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000074145s
	[INFO] 10.244.0.7:58029 - 57095 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000057243s
	[INFO] 10.244.0.7:55527 - 3863 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000095928s
	[INFO] 10.244.0.7:55527 - 63274 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000108164s
	[INFO] 10.244.0.7:45700 - 51892 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084018s
	[INFO] 10.244.0.7:45700 - 41915 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038365s
	[INFO] 10.244.0.7:58784 - 45245 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000069768s
	[INFO] 10.244.0.7:58784 - 37823 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000036065s
	[INFO] 10.244.0.7:58667 - 62946 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040009s
	[INFO] 10.244.0.7:58667 - 30688 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000029142s
	[INFO] 10.244.0.22:41944 - 3024 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000264702s
	[INFO] 10.244.0.22:37263 - 58847 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000072612s
	[INFO] 10.244.0.22:36429 - 42311 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000250297s
	[INFO] 10.244.0.22:55387 - 14377 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000189539s
	[INFO] 10.244.0.22:41338 - 35053 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097156s
	[INFO] 10.244.0.22:60395 - 7931 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000133663s
	[INFO] 10.244.0.22:52321 - 45381 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000469332s
	[INFO] 10.244.0.22:58548 - 41996 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000907551s
	[INFO] 10.244.0.25:47377 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00024668s
	[INFO] 10.244.0.25:57387 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000153301s
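	The NXDOMAIN/NOERROR pairs above are the normal search-path expansion of the pod resolver (ndots:5): each short name is tried against the namespace, svc and cluster suffixes before the fully qualified name answers. A minimal way to see the same resolver config from inside the cluster, assuming the context still exists (the pod name and image below are only examples):
	  kubectl --context addons-966657 run dns-probe --rm -it --restart=Never \
	    --image=busybox:1.36 -- cat /etc/resolv.conf
	  # expect a search list ending in svc.cluster.local cluster.local and "options ndots:5"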
	
	
	==> describe nodes <==
	Name:               addons-966657
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-966657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=addons-966657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_37_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-966657
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:37:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-966657
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:45:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:43:26 +0000   Mon, 19 Aug 2024 18:37:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:43:26 +0000   Mon, 19 Aug 2024 18:37:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:43:26 +0000   Mon, 19 Aug 2024 18:37:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:43:26 +0000   Mon, 19 Aug 2024 18:37:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    addons-966657
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 e37e8403430242cdba308e6f37608b12
	  System UUID:                e37e8403-4302-42cd-ba30-8e6f37608b12
	  Boot ID:                    37c397af-beed-4978-aa9c-52347a7b6c21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  default                     hello-world-app-55bf9c44b4-pk2z9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 coredns-6f6b679f8f-fzk2l                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m10s
	  kube-system                 etcd-addons-966657                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m15s
	  kube-system                 kube-apiserver-addons-966657               250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-controller-manager-addons-966657      200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-proxy-rthg8                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-scheduler-addons-966657               100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	  local-path-storage          local-path-provisioner-86d989889c-7rt79    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m7s                   kube-proxy       
	  Normal  Starting                 8m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m21s (x8 over 8m21s)  kubelet          Node addons-966657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m21s (x8 over 8m21s)  kubelet          Node addons-966657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m21s (x7 over 8m21s)  kubelet          Node addons-966657 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m15s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m15s                  kubelet          Node addons-966657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m15s                  kubelet          Node addons-966657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m15s                  kubelet          Node addons-966657 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m14s                  kubelet          Node addons-966657 status is now: NodeReady
	  Normal  RegisteredNode           8m11s                  node-controller  Node addons-966657 event: Registered Node addons-966657 in Controller
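	The node view above is plain kubectl output; if the profile is still up it can be regenerated, or narrowed to just the allocatable summary, with for example:
	  kubectl --context addons-966657 describe node addons-966657
	  kubectl --context addons-966657 get node addons-966657 -o jsonpath='{.status.allocatable}{"\n"}'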
	
	
	==> dmesg <==
	[  +6.264403] kauditd_printk_skb: 74 callbacks suppressed
	[ +10.483896] kauditd_printk_skb: 12 callbacks suppressed
	[Aug19 18:38] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.588865] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.087675] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.261550] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.172549] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.183281] kauditd_printk_skb: 78 callbacks suppressed
	[  +6.631030] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.316158] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.169179] kauditd_printk_skb: 28 callbacks suppressed
	[Aug19 18:39] kauditd_printk_skb: 28 callbacks suppressed
	[Aug19 18:40] kauditd_printk_skb: 45 callbacks suppressed
	[ +10.799620] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.380656] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.110541] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.028516] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.371677] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.808182] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.334805] kauditd_printk_skb: 20 callbacks suppressed
	[Aug19 18:41] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.051454] kauditd_printk_skb: 16 callbacks suppressed
	[ +10.148527] kauditd_printk_skb: 45 callbacks suppressed
	[Aug19 18:42] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.249167] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [48c646c07f67b32aa63c6041b4bf58f52b525dfe07393df8d2fb821de0eb9bf8] <==
	{"level":"info","ts":"2024-08-19T18:38:20.404737Z","caller":"traceutil/trace.go:171","msg":"trace[873809080] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1015; }","duration":"161.918979ms","start":"2024-08-19T18:38:20.242813Z","end":"2024-08-19T18:38:20.404732Z","steps":["trace[873809080] 'agreement among raft nodes before linearized reading'  (duration: 161.881395ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:38:20.404593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.511009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:38:20.404850Z","caller":"traceutil/trace.go:171","msg":"trace[621295776] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1015; }","duration":"104.771318ms","start":"2024-08-19T18:38:20.300069Z","end":"2024-08-19T18:38:20.404840Z","steps":["trace[621295776] 'agreement among raft nodes before linearized reading'  (duration: 104.411402ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:38:20.404888Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.551775ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:38:20.404921Z","caller":"traceutil/trace.go:171","msg":"trace[322440735] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1015; }","duration":"256.583445ms","start":"2024-08-19T18:38:20.148333Z","end":"2024-08-19T18:38:20.404916Z","steps":["trace[322440735] 'agreement among raft nodes before linearized reading'  (duration: 256.544319ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:38:20.404867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.190298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-8988944d9-56ss9.17ed352b09f160e2\" ","response":"range_response_count:1 size:813"}
	{"level":"info","ts":"2024-08-19T18:38:20.405156Z","caller":"traceutil/trace.go:171","msg":"trace[1655684797] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-8988944d9-56ss9.17ed352b09f160e2; range_end:; response_count:1; response_revision:1015; }","duration":"196.476499ms","start":"2024-08-19T18:38:20.208670Z","end":"2024-08-19T18:38:20.405147Z","steps":["trace[1655684797] 'agreement among raft nodes before linearized reading'  (duration: 196.148765ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:38:38.410871Z","caller":"traceutil/trace.go:171","msg":"trace[1839811041] linearizableReadLoop","detail":"{readStateIndex:1185; appliedIndex:1184; }","duration":"263.61331ms","start":"2024-08-19T18:38:38.147246Z","end":"2024-08-19T18:38:38.410859Z","steps":["trace[1839811041] 'read index received'  (duration: 263.493041ms)","trace[1839811041] 'applied index is now lower than readState.Index'  (duration: 118.211µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T18:38:38.411118Z","caller":"traceutil/trace.go:171","msg":"trace[2004033757] transaction","detail":"{read_only:false; response_revision:1156; number_of_response:1; }","duration":"520.108398ms","start":"2024-08-19T18:38:37.891000Z","end":"2024-08-19T18:38:38.411108Z","steps":["trace[2004033757] 'process raft request'  (duration: 519.777125ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:38:38.411215Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:38:37.890981Z","time spent":"520.160379ms","remote":"127.0.0.1:36512","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-q5gnfjrrokitcbfm5yuyhcdsa4\" mod_revision:1067 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-q5gnfjrrokitcbfm5yuyhcdsa4\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-q5gnfjrrokitcbfm5yuyhcdsa4\" > >"}
	{"level":"warn","ts":"2024-08-19T18:38:38.411330Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.083331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:38:38.411347Z","caller":"traceutil/trace.go:171","msg":"trace[1308101096] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1156; }","duration":"264.101011ms","start":"2024-08-19T18:38:38.147241Z","end":"2024-08-19T18:38:38.411342Z","steps":["trace[1308101096] 'agreement among raft nodes before linearized reading'  (duration: 264.069715ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:38:38.411486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.592848ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:38:38.411501Z","caller":"traceutil/trace.go:171","msg":"trace[906152090] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1156; }","duration":"127.608306ms","start":"2024-08-19T18:38:38.283888Z","end":"2024-08-19T18:38:38.411496Z","steps":["trace[906152090] 'agreement among raft nodes before linearized reading'  (duration: 127.584451ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:38:40.948177Z","caller":"traceutil/trace.go:171","msg":"trace[415449643] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"267.679812ms","start":"2024-08-19T18:38:40.680488Z","end":"2024-08-19T18:38:40.948168Z","steps":["trace[415449643] 'process raft request'  (duration: 267.34972ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:38:40.947979Z","caller":"traceutil/trace.go:171","msg":"trace[781183671] linearizableReadLoop","detail":"{readStateIndex:1189; appliedIndex:1188; }","duration":"164.213847ms","start":"2024-08-19T18:38:40.783752Z","end":"2024-08-19T18:38:40.947966Z","steps":["trace[781183671] 'read index received'  (duration: 164.001961ms)","trace[781183671] 'applied index is now lower than readState.Index'  (duration: 211.414µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:38:40.949226Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.471905ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:38:40.949250Z","caller":"traceutil/trace.go:171","msg":"trace[607320068] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1160; }","duration":"165.511773ms","start":"2024-08-19T18:38:40.783731Z","end":"2024-08-19T18:38:40.949243Z","steps":["trace[607320068] 'agreement among raft nodes before linearized reading'  (duration: 165.454728ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:38:40.949631Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.123303ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-8988944d9-56ss9\" ","response":"range_response_count:1 size:4561"}
	{"level":"info","ts":"2024-08-19T18:38:40.949674Z","caller":"traceutil/trace.go:171","msg":"trace[617807425] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-8988944d9-56ss9; range_end:; response_count:1; response_revision:1160; }","duration":"106.170312ms","start":"2024-08-19T18:38:40.843497Z","end":"2024-08-19T18:38:40.949667Z","steps":["trace[617807425] 'agreement among raft nodes before linearized reading'  (duration: 106.053752ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:38:40.950651Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.327409ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:38:40.950712Z","caller":"traceutil/trace.go:171","msg":"trace[25983320] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1160; }","duration":"126.389804ms","start":"2024-08-19T18:38:40.824312Z","end":"2024-08-19T18:38:40.950702Z","steps":["trace[25983320] 'agreement among raft nodes before linearized reading'  (duration: 125.185404ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:40:31.061501Z","caller":"traceutil/trace.go:171","msg":"trace[1138980623] transaction","detail":"{read_only:false; response_revision:1562; number_of_response:1; }","duration":"102.7546ms","start":"2024-08-19T18:40:30.958725Z","end":"2024-08-19T18:40:31.061480Z","steps":["trace[1138980623] 'process raft request'  (duration: 102.597922ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:40:59.890595Z","caller":"traceutil/trace.go:171","msg":"trace[1933265775] transaction","detail":"{read_only:false; response_revision:1760; number_of_response:1; }","duration":"216.764463ms","start":"2024-08-19T18:40:59.673812Z","end":"2024-08-19T18:40:59.890576Z","steps":["trace[1933265775] 'process raft request'  (duration: 216.493ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:41:33.466753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.145574ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9737505237065975663 > lease_revoke:<id:0722916bedd2c6a6>","response":"size:27"}
	
	
	==> kernel <==
	 18:45:34 up 8 min,  0 users,  load average: 0.13, 0.76, 0.61
	Linux addons-966657 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [da32522f010e9ad7a49d46016ec5778a3be0b58959034ea4b7a125e7f4f24677] <==
	E0819 18:39:20.251223       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.104.75:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.104.75:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.104.75:443: connect: connection refused" logger="UnhandledError"
	I0819 18:39:20.311600       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0819 18:40:05.512922       1 conn.go:339] Error on socket receive: read tcp 192.168.39.241:8443->192.168.39.1:44922: use of closed network connection
	E0819 18:40:05.704749       1 conn.go:339] Error on socket receive: read tcp 192.168.39.241:8443->192.168.39.1:44940: use of closed network connection
	E0819 18:40:23.495244       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.241:8443->10.244.0.24:58902: read: connection reset by peer
	I0819 18:40:30.195814       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 18:40:30.440439       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.131.123"}
	I0819 18:40:31.117332       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 18:40:32.244939       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 18:40:42.399800       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0819 18:40:55.999786       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.89.241"}
	I0819 18:41:18.410925       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:41:18.414691       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:41:18.434729       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:41:18.434781       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:41:18.464656       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:41:18.464827       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:41:18.471932       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:41:18.472028       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:41:18.513129       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:41:18.513183       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 18:41:19.472217       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 18:41:19.513721       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0819 18:41:19.597465       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0819 18:42:54.891341       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.175.220"}
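	The earlier "v1beta1.metrics.k8s.io failed ... connection refused" line is the aggregation layer failing to reach metrics-server, which matches the MetricsServer failure this post-mortem belongs to. Possible follow-up checks (the k8s-app=metrics-server label is the one the addon manifest normally ships with):
	  kubectl --context addons-966657 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context addons-966657 get --raw /apis/metrics.k8s.io/v1beta1
	  kubectl --context addons-966657 -n kube-system get pods -l k8s-app=metrics-server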
	
	
	==> kube-controller-manager [ea9adb58c21d966e2ce73b98f7d494fc164581e9d02bab0ed2b222b30ab79892] <==
	E0819 18:43:15.466856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 18:43:26.233269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-966657"
	W0819 18:43:36.986344       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:43:36.986490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:43:48.209774       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:43:48.209859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:44:06.318580       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:44:06.318702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:44:10.954120       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:44:10.954185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:44:28.264725       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:44:28.264782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:44:30.807881       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:44:30.808012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:44:46.080003       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:44:46.080133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:44:49.636243       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:44:49.636432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:45:11.311232       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:45:11.311438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:45:18.330596       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:45:18.330723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:45:20.375643       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:45:20.375749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 18:45:32.570343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="6.842µs"
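	The repeated PartialObjectMetadata list/watch failures start after the snapshot.storage.k8s.io and gadget.kinvolk.io groups were torn down (see the "Terminating all watchers" lines in the kube-apiserver log), so the metadata informer appears to be re-listing resources whose CRDs are gone. A quick way to confirm which of those CRDs remain on this cluster:
	  kubectl --context addons-966657 get crd | grep -E 'snapshot.storage.k8s.io|gadget.kinvolk.io' || echo "no matching CRDs"
	  kubectl --context addons-966657 api-resources --api-group=snapshot.storage.k8s.io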
	
	
	==> kube-proxy [f4040c311d32e1d2cbf7bd4cb3e636a7ac2d986c1cd5880519d0be6d877003dc] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:37:26.355820       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:37:26.400777       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.241"]
	E0819 18:37:26.400871       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:37:26.524250       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:37:26.524293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:37:26.524315       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:37:26.543850       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:37:26.544129       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:37:26.544159       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:37:26.554007       1 config.go:197] "Starting service config controller"
	I0819 18:37:26.554033       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:37:26.554050       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:37:26.554053       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:37:26.554452       1 config.go:326] "Starting node config controller"
	I0819 18:37:26.554460       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:37:26.654928       1 shared_informer.go:320] Caches are synced for node config
	I0819 18:37:26.654996       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:37:26.655036       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [56311b94f99b4d6eafc29bf26ac1fd36ca9238da386c4e2868a69a8976af5685] <==
	W0819 18:37:16.879186       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 18:37:16.879287       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 18:37:16.906280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 18:37:16.906947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:16.914015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:37:16.914469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.067947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 18:37:17.068417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.068928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 18:37:17.069311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.103717       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:37:17.103781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.103840       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 18:37:17.103852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.124597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 18:37:17.124645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.142738       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 18:37:17.142788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.153947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 18:37:17.154018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.189418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:37:17.189467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:17.284630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 18:37:17.284680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 18:37:18.829236       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
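	The "forbidden" list errors are all from the first seconds after the scheduler started, before RBAC bootstrap completed; the final "Caches are synced" line shows it recovered. The permissions can be spot-checked after the fact, for example:
	  kubectl --context addons-966657 auth can-i list nodes --as=system:kube-scheduler
	  kubectl --context addons-966657 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler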
	
	
	==> kubelet <==
	Aug 19 18:44:48 addons-966657 kubelet[1230]: E0819 18:44:48.898525    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724093088898026148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:44:58 addons-966657 kubelet[1230]: E0819 18:44:58.901199    1230 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724093098900866536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:44:58 addons-966657 kubelet[1230]: E0819 18:44:58.901226    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724093098900866536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:45:08 addons-966657 kubelet[1230]: E0819 18:45:08.904630    1230 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724093108903535513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:45:08 addons-966657 kubelet[1230]: E0819 18:45:08.904658    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724093108903535513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:45:18 addons-966657 kubelet[1230]: E0819 18:45:18.653578    1230 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:45:18 addons-966657 kubelet[1230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:45:18 addons-966657 kubelet[1230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:45:18 addons-966657 kubelet[1230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:45:18 addons-966657 kubelet[1230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:45:18 addons-966657 kubelet[1230]: E0819 18:45:18.907928    1230 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724093118907539842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:45:18 addons-966657 kubelet[1230]: E0819 18:45:18.907956    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724093118907539842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:45:28 addons-966657 kubelet[1230]: E0819 18:45:28.910638    1230 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724093128910287794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:45:28 addons-966657 kubelet[1230]: E0819 18:45:28.910679    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724093128910287794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:45:32 addons-966657 kubelet[1230]: I0819 18:45:32.601220    1230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-pk2z9" podStartSLOduration=156.467398665 podStartE2EDuration="2m38.601201513s" podCreationTimestamp="2024-08-19 18:42:54 +0000 UTC" firstStartedPulling="2024-08-19 18:42:55.217608078 +0000 UTC m=+336.747752708" lastFinishedPulling="2024-08-19 18:42:57.351410927 +0000 UTC m=+338.881555556" observedRunningTime="2024-08-19 18:42:58.425941792 +0000 UTC m=+339.956086442" watchObservedRunningTime="2024-08-19 18:45:32.601201513 +0000 UTC m=+494.131346162"
	Aug 19 18:45:33 addons-966657 kubelet[1230]: I0819 18:45:33.962605    1230 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6ad30996-e1ba-4a2d-9054-f54a241e9efb-tmp-dir\") pod \"6ad30996-e1ba-4a2d-9054-f54a241e9efb\" (UID: \"6ad30996-e1ba-4a2d-9054-f54a241e9efb\") "
	Aug 19 18:45:33 addons-966657 kubelet[1230]: I0819 18:45:33.962661    1230 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqxc4\" (UniqueName: \"kubernetes.io/projected/6ad30996-e1ba-4a2d-9054-f54a241e9efb-kube-api-access-qqxc4\") pod \"6ad30996-e1ba-4a2d-9054-f54a241e9efb\" (UID: \"6ad30996-e1ba-4a2d-9054-f54a241e9efb\") "
	Aug 19 18:45:33 addons-966657 kubelet[1230]: I0819 18:45:33.963195    1230 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ad30996-e1ba-4a2d-9054-f54a241e9efb-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6ad30996-e1ba-4a2d-9054-f54a241e9efb" (UID: "6ad30996-e1ba-4a2d-9054-f54a241e9efb"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 19 18:45:33 addons-966657 kubelet[1230]: I0819 18:45:33.976658    1230 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad30996-e1ba-4a2d-9054-f54a241e9efb-kube-api-access-qqxc4" (OuterVolumeSpecName: "kube-api-access-qqxc4") pod "6ad30996-e1ba-4a2d-9054-f54a241e9efb" (UID: "6ad30996-e1ba-4a2d-9054-f54a241e9efb"). InnerVolumeSpecName "kube-api-access-qqxc4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 18:45:33 addons-966657 kubelet[1230]: I0819 18:45:33.980926    1230 scope.go:117] "RemoveContainer" containerID="9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269"
	Aug 19 18:45:34 addons-966657 kubelet[1230]: I0819 18:45:34.019613    1230 scope.go:117] "RemoveContainer" containerID="9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269"
	Aug 19 18:45:34 addons-966657 kubelet[1230]: E0819 18:45:34.020438    1230 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269\": container with ID starting with 9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269 not found: ID does not exist" containerID="9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269"
	Aug 19 18:45:34 addons-966657 kubelet[1230]: I0819 18:45:34.020473    1230 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269"} err="failed to get container status \"9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269\": rpc error: code = NotFound desc = could not find container \"9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269\": container with ID starting with 9fefad11c7927f5b51629838d488410c65b28c9e5cef897622aecc85ee8f5269 not found: ID does not exist"
	Aug 19 18:45:34 addons-966657 kubelet[1230]: I0819 18:45:34.063640    1230 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6ad30996-e1ba-4a2d-9054-f54a241e9efb-tmp-dir\") on node \"addons-966657\" DevicePath \"\""
	Aug 19 18:45:34 addons-966657 kubelet[1230]: I0819 18:45:34.063690    1230 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qqxc4\" (UniqueName: \"kubernetes.io/projected/6ad30996-e1ba-4a2d-9054-f54a241e9efb-kube-api-access-qqxc4\") on node \"addons-966657\" DevicePath \"\""
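	The eviction-manager errors say the CRI ImageFsInfo response is missing the stats kubelet wants for HasDedicatedImageFs, not that the node is under resource pressure. One way to see what cri-o actually reports on the node, assuming the VM is still reachable over SSH (later in this run it was not):
	  minikube -p addons-966657 ssh -- sudo crictl imagefsinfo
	  minikube -p addons-966657 ssh -- sudo crictl info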
	
	
	==> storage-provisioner [ea2a083912efd25f0af7a098d4001fc072b2f79f7a70dad39be5a2f71444bab6] <==
	I0819 18:37:32.612606       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 18:37:32.725501       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 18:37:32.725561       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 18:37:32.847665       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 18:37:32.847871       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-966657_f4ac03dc-fca5-4d4d-a3f4-c51e5cb1ff01!
	I0819 18:37:32.848972       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c115955-4dc8-4394-944c-0691b9016828", APIVersion:"v1", ResourceVersion:"754", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-966657_f4ac03dc-fca5-4d4d-a3f4-c51e5cb1ff01 became leader
	I0819 18:37:33.048632       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-966657_f4ac03dc-fca5-4d4d-a3f4-c51e5cb1ff01!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-966657 -n addons-966657
helpers_test.go:261: (dbg) Run:  kubectl --context addons-966657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (309.06s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.35s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-966657
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-966657: exit status 82 (2m0.485217519s)

                                                
                                                
-- stdout --
	* Stopping node "addons-966657"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-966657" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-966657
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-966657: exit status 11 (21.577130338s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.241:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-966657" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-966657
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-966657: exit status 11 (6.14205594s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.241:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-966657" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-966657
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-966657: exit status 11 (6.143688757s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.241:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-966657" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.35s)
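
The three addon-command failures above share one root symptom: each command exits with status 11 because minikube could not reach the node's SSH endpoint while checking the paused state ("dial tcp 192.168.39.241:22: connect: no route to host"). As a minimal sketch, not part of the test suite and using the address reported in those errors, a host-side probe like the following can confirm whether that SSH port is reachable at all before the enable/disable commands are retried:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address copied from the "no route to host" errors above; adjust if the
	// node was re-created with a different IP.
	const addr = "192.168.39.241:22"

	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// Matches the failure mode in the log: the VM is stopped or the route
		// to its network is gone, so the paused-state check cannot run.
		fmt.Printf("SSH endpoint unreachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("SSH endpoint reachable; addon enable/disable can be retried")
}

Since TestAddons/StoppedEnableDisable stops the cluster before running these commands, an unreachable endpoint is the expected state here; the probe simply makes that condition explicit.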

                                                
                                    
x
+
TestFunctional/serial/SoftStart (834.14s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-124593 --alsologtostderr -v=8
E0819 18:49:57.044997  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:57.686952  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:58.968702  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:50:01.532053  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:50:06.653686  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:50:16.895500  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:50:37.377666  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:18.340951  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:52:40.262852  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:54:56.397803  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:55:24.105991  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:59:56.398732  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-124593 --alsologtostderr -v=8: exit status 109 (13m52.73382184s)
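
The repeated cert_rotation errors above come from client-go trying to reload a client certificate that belonged to the earlier addons-966657 profile and has since been removed from disk. As an illustrative sketch only, with the path copied verbatim from those errors and the check itself not part of minikube, a quick existence test makes the stale reference obvious:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken verbatim from the cert_rotation errors above.
	certPath := "/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt"

	if _, err := os.Stat(certPath); err != nil {
		if os.IsNotExist(err) {
			fmt.Printf("stale reference: %s no longer exists; the old profile's client config can be dropped\n", certPath)
			return
		}
		fmt.Printf("could not check %s: %v\n", certPath, err)
		return
	}
	fmt.Println("client certificate still present")
}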

                                                
                                                
-- stdout --
	* [functional-124593] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-124593" primary control-plane node in "functional-124593" cluster
	* Updating the running kvm2 "functional-124593" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:49:56.790328  444547 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:49:56.790453  444547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:49:56.790459  444547 out.go:358] Setting ErrFile to fd 2...
	I0819 18:49:56.790463  444547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:49:56.790638  444547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 18:49:56.791174  444547 out.go:352] Setting JSON to false
	I0819 18:49:56.792114  444547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9148,"bootTime":1724084249,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:49:56.792181  444547 start.go:139] virtualization: kvm guest
	I0819 18:49:56.794648  444547 out.go:177] * [functional-124593] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:49:56.796256  444547 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 18:49:56.796302  444547 notify.go:220] Checking for updates...
	I0819 18:49:56.799145  444547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:49:56.800604  444547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 18:49:56.802061  444547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 18:49:56.803353  444547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:49:56.804793  444547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:49:56.806582  444547 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:49:56.806680  444547 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 18:49:56.807152  444547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:49:56.807235  444547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:49:56.823439  444547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0819 18:49:56.823898  444547 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:49:56.824445  444547 main.go:141] libmachine: Using API Version  1
	I0819 18:49:56.824484  444547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:49:56.824923  444547 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:49:56.825223  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.864107  444547 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:49:56.865533  444547 start.go:297] selected driver: kvm2
	I0819 18:49:56.865559  444547 start.go:901] validating driver "kvm2" against &{Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:49:56.865676  444547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:49:56.866051  444547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:49:56.866145  444547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:49:56.882415  444547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:49:56.883177  444547 cni.go:84] Creating CNI manager for ""
	I0819 18:49:56.883193  444547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:49:56.883244  444547 start.go:340] cluster config:
	{Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:49:56.883396  444547 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:49:56.885199  444547 out.go:177] * Starting "functional-124593" primary control-plane node in "functional-124593" cluster
	I0819 18:49:56.886649  444547 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:49:56.886699  444547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:49:56.886708  444547 cache.go:56] Caching tarball of preloaded images
	I0819 18:49:56.886828  444547 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:49:56.886844  444547 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:49:56.886977  444547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/config.json ...
	I0819 18:49:56.887255  444547 start.go:360] acquireMachinesLock for functional-124593: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:49:56.887316  444547 start.go:364] duration metric: took 31.483µs to acquireMachinesLock for "functional-124593"
	I0819 18:49:56.887333  444547 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:49:56.887345  444547 fix.go:54] fixHost starting: 
	I0819 18:49:56.887711  444547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:49:56.887765  444547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:49:56.903210  444547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43899
	I0819 18:49:56.903686  444547 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:49:56.904263  444547 main.go:141] libmachine: Using API Version  1
	I0819 18:49:56.904298  444547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:49:56.904680  444547 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:49:56.904935  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.905158  444547 main.go:141] libmachine: (functional-124593) Calling .GetState
	I0819 18:49:56.906833  444547 fix.go:112] recreateIfNeeded on functional-124593: state=Running err=<nil>
	W0819 18:49:56.906856  444547 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:49:56.908782  444547 out.go:177] * Updating the running kvm2 "functional-124593" VM ...
	I0819 18:49:56.910443  444547 machine.go:93] provisionDockerMachine start ...
	I0819 18:49:56.910478  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.910823  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:56.913259  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:56.913615  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:56.913638  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:56.913753  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:56.914043  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:56.914207  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:56.914341  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:56.914485  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:56.914684  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:56.914697  444547 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:49:57.017550  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-124593
	
	I0819 18:49:57.017585  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.017923  444547 buildroot.go:166] provisioning hostname "functional-124593"
	I0819 18:49:57.017956  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.018164  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.021185  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.021551  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.021598  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.021780  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.022011  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.022177  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.022309  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.022452  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.022654  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.022668  444547 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-124593 && echo "functional-124593" | sudo tee /etc/hostname
	I0819 18:49:57.141478  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-124593
	
	I0819 18:49:57.141514  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.144157  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.144414  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.144449  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.144722  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.144969  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.145192  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.145388  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.145570  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.145756  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.145776  444547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-124593' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-124593/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-124593' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:49:57.249989  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:49:57.250034  444547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 18:49:57.250086  444547 buildroot.go:174] setting up certificates
	I0819 18:49:57.250099  444547 provision.go:84] configureAuth start
	I0819 18:49:57.250118  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.250442  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:49:57.253181  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.253490  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.253519  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.253712  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.256213  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.256541  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.256586  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.256752  444547 provision.go:143] copyHostCerts
	I0819 18:49:57.256784  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 18:49:57.256824  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 18:49:57.256848  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 18:49:57.256918  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 18:49:57.257021  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 18:49:57.257043  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 18:49:57.257048  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 18:49:57.257071  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 18:49:57.257122  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 18:49:57.257160  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 18:49:57.257176  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 18:49:57.257198  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 18:49:57.257249  444547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.functional-124593 san=[127.0.0.1 192.168.39.22 functional-124593 localhost minikube]
	I0819 18:49:57.505075  444547 provision.go:177] copyRemoteCerts
	I0819 18:49:57.505163  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:49:57.505194  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.508248  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.508654  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.508690  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.508942  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.509160  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.509381  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.509556  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:49:57.591978  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:49:57.592075  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 18:49:57.620626  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:49:57.620699  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 18:49:57.646085  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:49:57.646168  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:49:57.671918  444547 provision.go:87] duration metric: took 421.80001ms to configureAuth
	I0819 18:49:57.671954  444547 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:49:57.672176  444547 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:49:57.672267  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.675054  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.675420  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.675456  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.675667  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.675902  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.676057  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.676211  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.676410  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.676596  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.676611  444547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:50:03.241286  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:50:03.241321  444547 machine.go:96] duration metric: took 6.330855619s to provisionDockerMachine
	I0819 18:50:03.241334  444547 start.go:293] postStartSetup for "functional-124593" (driver="kvm2")
	I0819 18:50:03.241346  444547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:50:03.241368  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.241892  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:50:03.241919  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.244822  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.245262  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.245291  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.245469  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.245716  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.245889  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.246048  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.327892  444547 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:50:03.332233  444547 command_runner.go:130] > NAME=Buildroot
	I0819 18:50:03.332262  444547 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 18:50:03.332268  444547 command_runner.go:130] > ID=buildroot
	I0819 18:50:03.332276  444547 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 18:50:03.332284  444547 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 18:50:03.332381  444547 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:50:03.332400  444547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 18:50:03.332476  444547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 18:50:03.332579  444547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 18:50:03.332593  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 18:50:03.332685  444547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts -> hosts in /etc/test/nested/copy/438159
	I0819 18:50:03.332692  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts -> /etc/test/nested/copy/438159/hosts
	I0819 18:50:03.332732  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/438159
	I0819 18:50:03.343618  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 18:50:03.367775  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts --> /etc/test/nested/copy/438159/hosts (40 bytes)
	I0819 18:50:03.392035  444547 start.go:296] duration metric: took 150.684705ms for postStartSetup
	I0819 18:50:03.392093  444547 fix.go:56] duration metric: took 6.504748451s for fixHost
	I0819 18:50:03.392120  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.394902  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.395203  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.395231  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.395450  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.395682  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.395876  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.396030  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.396215  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:50:03.396420  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:50:03.396434  444547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:50:03.498031  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724093403.488650243
	
	I0819 18:50:03.498062  444547 fix.go:216] guest clock: 1724093403.488650243
	I0819 18:50:03.498069  444547 fix.go:229] Guest: 2024-08-19 18:50:03.488650243 +0000 UTC Remote: 2024-08-19 18:50:03.392098301 +0000 UTC m=+6.637869514 (delta=96.551942ms)
	I0819 18:50:03.498115  444547 fix.go:200] guest clock delta is within tolerance: 96.551942ms
	I0819 18:50:03.498121  444547 start.go:83] releasing machines lock for "functional-124593", held for 6.610795712s
	I0819 18:50:03.498146  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.498456  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:50:03.501197  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.501685  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.501717  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.501963  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502567  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502825  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502931  444547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:50:03.502977  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.503104  444547 ssh_runner.go:195] Run: cat /version.json
	I0819 18:50:03.503130  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.505641  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.505904  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.505942  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.505982  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.506089  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.506248  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.506286  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.506326  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.506510  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.506529  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.506705  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.506709  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.506856  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.507023  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.596444  444547 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 18:50:03.596676  444547 ssh_runner.go:195] Run: systemctl --version
	I0819 18:50:03.642156  444547 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 18:50:03.642205  444547 command_runner.go:130] > systemd 252 (252)
	I0819 18:50:03.642223  444547 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 18:50:03.642284  444547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:50:04.032467  444547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 18:50:04.057730  444547 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 18:50:04.057919  444547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:50:04.058009  444547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:50:04.094792  444547 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 18:50:04.094824  444547 start.go:495] detecting cgroup driver to use...
	I0819 18:50:04.094892  444547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:50:04.216404  444547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:50:04.250117  444547 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:50:04.250182  444547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:50:04.298450  444547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:50:04.329276  444547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:50:04.576464  444547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:50:04.796403  444547 docker.go:233] disabling docker service ...
	I0819 18:50:04.796509  444547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:50:04.824051  444547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:50:04.841929  444547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:50:05.032450  444547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:50:05.230662  444547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:50:05.261270  444547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:50:05.307751  444547 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0819 18:50:05.308002  444547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:50:05.308071  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.325985  444547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:50:05.326072  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.340857  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.355923  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.368797  444547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:50:05.384107  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.396132  444547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.407497  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.421137  444547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:50:05.431493  444547 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 18:50:05.431832  444547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:50:05.444023  444547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:50:05.610160  444547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:51:35.953940  444547 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.343723561s)
	I0819 18:51:35.953984  444547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:51:35.954042  444547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:51:35.958905  444547 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 18:51:35.958943  444547 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 18:51:35.958954  444547 command_runner.go:130] > Device: 0,22	Inode: 1653        Links: 1
	I0819 18:51:35.958965  444547 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 18:51:35.958973  444547 command_runner.go:130] > Access: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958982  444547 command_runner.go:130] > Modify: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958993  444547 command_runner.go:130] > Change: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958999  444547 command_runner.go:130] >  Birth: -
	I0819 18:51:35.959026  444547 start.go:563] Will wait 60s for crictl version
	I0819 18:51:35.959080  444547 ssh_runner.go:195] Run: which crictl
	I0819 18:51:35.962908  444547 command_runner.go:130] > /usr/bin/crictl
	I0819 18:51:35.963010  444547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:51:35.995379  444547 command_runner.go:130] > Version:  0.1.0
	I0819 18:51:35.995417  444547 command_runner.go:130] > RuntimeName:  cri-o
	I0819 18:51:35.995425  444547 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 18:51:35.995433  444547 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 18:51:35.996527  444547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:51:35.996626  444547 ssh_runner.go:195] Run: crio --version
	I0819 18:51:36.025037  444547 command_runner.go:130] > crio version 1.29.1
	I0819 18:51:36.025067  444547 command_runner.go:130] > Version:        1.29.1
	I0819 18:51:36.025076  444547 command_runner.go:130] > GitCommit:      unknown
	I0819 18:51:36.025082  444547 command_runner.go:130] > GitCommitDate:  unknown
	I0819 18:51:36.025088  444547 command_runner.go:130] > GitTreeState:   clean
	I0819 18:51:36.025097  444547 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 18:51:36.025103  444547 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 18:51:36.025108  444547 command_runner.go:130] > Compiler:       gc
	I0819 18:51:36.025115  444547 command_runner.go:130] > Platform:       linux/amd64
	I0819 18:51:36.025122  444547 command_runner.go:130] > Linkmode:       dynamic
	I0819 18:51:36.025137  444547 command_runner.go:130] > BuildTags:      
	I0819 18:51:36.025142  444547 command_runner.go:130] >   containers_image_ostree_stub
	I0819 18:51:36.025147  444547 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 18:51:36.025151  444547 command_runner.go:130] >   btrfs_noversion
	I0819 18:51:36.025156  444547 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 18:51:36.025161  444547 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 18:51:36.025169  444547 command_runner.go:130] >   seccomp
	I0819 18:51:36.025175  444547 command_runner.go:130] > LDFlags:          unknown
	I0819 18:51:36.025182  444547 command_runner.go:130] > SeccompEnabled:   true
	I0819 18:51:36.025187  444547 command_runner.go:130] > AppArmorEnabled:  false
	I0819 18:51:36.025256  444547 ssh_runner.go:195] Run: crio --version
	I0819 18:51:36.052216  444547 command_runner.go:130] > crio version 1.29.1
	I0819 18:51:36.052240  444547 command_runner.go:130] > Version:        1.29.1
	I0819 18:51:36.052247  444547 command_runner.go:130] > GitCommit:      unknown
	I0819 18:51:36.052252  444547 command_runner.go:130] > GitCommitDate:  unknown
	I0819 18:51:36.052256  444547 command_runner.go:130] > GitTreeState:   clean
	I0819 18:51:36.052261  444547 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 18:51:36.052266  444547 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 18:51:36.052270  444547 command_runner.go:130] > Compiler:       gc
	I0819 18:51:36.052282  444547 command_runner.go:130] > Platform:       linux/amd64
	I0819 18:51:36.052288  444547 command_runner.go:130] > Linkmode:       dynamic
	I0819 18:51:36.052294  444547 command_runner.go:130] > BuildTags:      
	I0819 18:51:36.052301  444547 command_runner.go:130] >   containers_image_ostree_stub
	I0819 18:51:36.052307  444547 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 18:51:36.052317  444547 command_runner.go:130] >   btrfs_noversion
	I0819 18:51:36.052324  444547 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 18:51:36.052333  444547 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 18:51:36.052338  444547 command_runner.go:130] >   seccomp
	I0819 18:51:36.052345  444547 command_runner.go:130] > LDFlags:          unknown
	I0819 18:51:36.052350  444547 command_runner.go:130] > SeccompEnabled:   true
	I0819 18:51:36.052356  444547 command_runner.go:130] > AppArmorEnabled:  false
	I0819 18:51:36.055292  444547 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:51:36.056598  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:51:36.059532  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:51:36.059864  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:51:36.059895  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:51:36.060137  444547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:51:36.064416  444547 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0819 18:51:36.064570  444547 kubeadm.go:883] updating cluster {Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
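
	The single-line dump above is easier to follow when its nesting is made explicit. The following is an illustrative, heavily trimmed struct whose field names and values are copied from that dump; it is not minikube's actual ClusterConfig definition:

	// clusterconfig_sketch.go: trimmed illustration of the nested config shape.
	package main

	import "fmt"

	type Node struct {
		Name              string
		IP                string
		Port              int
		KubernetesVersion string
		ContainerRuntime  string
		ControlPlane      bool
		Worker            bool
	}

	type KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
		ServiceCIDR       string
	}

	type ClusterConfig struct {
		Name             string
		Driver           string
		Memory           int
		CPUs             int
		KubernetesConfig KubernetesConfig
		Nodes            []Node
	}

	func main() {
		cc := ClusterConfig{
			Name:   "functional-124593",
			Driver: "kvm2",
			Memory: 4000,
			CPUs:   2,
			KubernetesConfig: KubernetesConfig{
				KubernetesVersion: "v1.31.0",
				ClusterName:       "functional-124593",
				ContainerRuntime:  "crio",
				ServiceCIDR:       "10.96.0.0/12",
			},
			Nodes: []Node{{
				IP: "192.168.39.22", Port: 8441,
				KubernetesVersion: "v1.31.0", ContainerRuntime: "crio",
				ControlPlane: true, Worker: true,
			}},
		}
		fmt.Printf("%+v\n", cc)
	}
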
	I0819 18:51:36.064698  444547 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:51:36.064782  444547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:51:36.110239  444547 command_runner.go:130] > {
	I0819 18:51:36.110264  444547 command_runner.go:130] >   "images": [
	I0819 18:51:36.110268  444547 command_runner.go:130] >     {
	I0819 18:51:36.110277  444547 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 18:51:36.110281  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110287  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 18:51:36.110290  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110294  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110303  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 18:51:36.110310  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 18:51:36.110314  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110319  444547 command_runner.go:130] >       "size": "87165492",
	I0819 18:51:36.110324  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110330  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110343  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110350  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110359  444547 command_runner.go:130] >     },
	I0819 18:51:36.110364  444547 command_runner.go:130] >     {
	I0819 18:51:36.110373  444547 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 18:51:36.110391  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110399  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 18:51:36.110402  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110406  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110414  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 18:51:36.110425  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 18:51:36.110432  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110443  444547 command_runner.go:130] >       "size": "31470524",
	I0819 18:51:36.110453  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110461  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110468  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110477  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110483  444547 command_runner.go:130] >     },
	I0819 18:51:36.110502  444547 command_runner.go:130] >     {
	I0819 18:51:36.110513  444547 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 18:51:36.110522  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110533  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 18:51:36.110539  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110549  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110563  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 18:51:36.110577  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 18:51:36.110586  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110594  444547 command_runner.go:130] >       "size": "61245718",
	I0819 18:51:36.110601  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110611  444547 command_runner.go:130] >       "username": "nonroot",
	I0819 18:51:36.110621  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110631  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110637  444547 command_runner.go:130] >     },
	I0819 18:51:36.110645  444547 command_runner.go:130] >     {
	I0819 18:51:36.110658  444547 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 18:51:36.110668  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110677  444547 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 18:51:36.110684  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110701  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110715  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 18:51:36.110733  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 18:51:36.110742  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110753  444547 command_runner.go:130] >       "size": "149009664",
	I0819 18:51:36.110760  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.110764  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.110770  444547 command_runner.go:130] >       },
	I0819 18:51:36.110777  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110787  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110797  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110805  444547 command_runner.go:130] >     },
	I0819 18:51:36.110814  444547 command_runner.go:130] >     {
	I0819 18:51:36.110823  444547 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 18:51:36.110832  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110842  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 18:51:36.110849  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110853  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110868  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 18:51:36.110884  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 18:51:36.110893  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110901  444547 command_runner.go:130] >       "size": "95233506",
	I0819 18:51:36.110909  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.110918  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.110927  444547 command_runner.go:130] >       },
	I0819 18:51:36.110934  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110939  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110947  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110956  444547 command_runner.go:130] >     },
	I0819 18:51:36.110965  444547 command_runner.go:130] >     {
	I0819 18:51:36.110978  444547 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 18:51:36.110988  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110999  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 18:51:36.111007  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111013  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111025  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 18:51:36.111040  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 18:51:36.111049  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111060  444547 command_runner.go:130] >       "size": "89437512",
	I0819 18:51:36.111070  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111080  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.111089  444547 command_runner.go:130] >       },
	I0819 18:51:36.111096  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111104  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111114  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111122  444547 command_runner.go:130] >     },
	I0819 18:51:36.111128  444547 command_runner.go:130] >     {
	I0819 18:51:36.111140  444547 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 18:51:36.111148  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111154  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 18:51:36.111163  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111170  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111185  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 18:51:36.111199  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 18:51:36.111206  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111213  444547 command_runner.go:130] >       "size": "92728217",
	I0819 18:51:36.111223  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.111230  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111239  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111246  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111254  444547 command_runner.go:130] >     },
	I0819 18:51:36.111267  444547 command_runner.go:130] >     {
	I0819 18:51:36.111281  444547 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 18:51:36.111290  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111299  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 18:51:36.111307  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111313  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111333  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 18:51:36.111345  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 18:51:36.111351  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111355  444547 command_runner.go:130] >       "size": "68420936",
	I0819 18:51:36.111361  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111365  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.111370  444547 command_runner.go:130] >       },
	I0819 18:51:36.111374  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111381  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111385  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111389  444547 command_runner.go:130] >     },
	I0819 18:51:36.111393  444547 command_runner.go:130] >     {
	I0819 18:51:36.111399  444547 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 18:51:36.111405  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111410  444547 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 18:51:36.111415  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111420  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111429  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 18:51:36.111438  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 18:51:36.111442  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111448  444547 command_runner.go:130] >       "size": "742080",
	I0819 18:51:36.111452  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111456  444547 command_runner.go:130] >         "value": "65535"
	I0819 18:51:36.111460  444547 command_runner.go:130] >       },
	I0819 18:51:36.111464  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111480  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111486  444547 command_runner.go:130] >       "pinned": true
	I0819 18:51:36.111494  444547 command_runner.go:130] >     }
	I0819 18:51:36.111502  444547 command_runner.go:130] >   ]
	I0819 18:51:36.111507  444547 command_runner.go:130] > }
	I0819 18:51:36.111701  444547 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:51:36.111714  444547 crio.go:433] Images already preloaded, skipping extraction
	I0819 18:51:36.111767  444547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:51:36.143806  444547 command_runner.go:130] > {
	I0819 18:51:36.143831  444547 command_runner.go:130] >   "images": [
	I0819 18:51:36.143835  444547 command_runner.go:130] >     {
	I0819 18:51:36.143843  444547 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 18:51:36.143848  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.143854  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 18:51:36.143857  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143861  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.143870  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 18:51:36.143877  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 18:51:36.143883  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143887  444547 command_runner.go:130] >       "size": "87165492",
	I0819 18:51:36.143891  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.143898  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.143904  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.143909  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.143912  444547 command_runner.go:130] >     },
	I0819 18:51:36.143916  444547 command_runner.go:130] >     {
	I0819 18:51:36.143922  444547 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 18:51:36.143929  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.143934  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 18:51:36.143939  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143943  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.143953  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 18:51:36.143960  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 18:51:36.143967  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143978  444547 command_runner.go:130] >       "size": "31470524",
	I0819 18:51:36.143984  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.143992  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144001  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144007  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144016  444547 command_runner.go:130] >     },
	I0819 18:51:36.144021  444547 command_runner.go:130] >     {
	I0819 18:51:36.144036  444547 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 18:51:36.144043  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144048  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 18:51:36.144054  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144058  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144067  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 18:51:36.144085  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 18:51:36.144093  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144100  444547 command_runner.go:130] >       "size": "61245718",
	I0819 18:51:36.144109  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.144119  444547 command_runner.go:130] >       "username": "nonroot",
	I0819 18:51:36.144126  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144134  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144138  444547 command_runner.go:130] >     },
	I0819 18:51:36.144142  444547 command_runner.go:130] >     {
	I0819 18:51:36.144148  444547 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 18:51:36.144154  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144159  444547 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 18:51:36.144162  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144165  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144172  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 18:51:36.144188  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 18:51:36.144197  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144204  444547 command_runner.go:130] >       "size": "149009664",
	I0819 18:51:36.144213  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144220  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144227  444547 command_runner.go:130] >       },
	I0819 18:51:36.144231  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144237  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144243  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144249  444547 command_runner.go:130] >     },
	I0819 18:51:36.144252  444547 command_runner.go:130] >     {
	I0819 18:51:36.144259  444547 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 18:51:36.144267  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144276  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 18:51:36.144285  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144291  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144305  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 18:51:36.144320  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 18:51:36.144327  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144333  444547 command_runner.go:130] >       "size": "95233506",
	I0819 18:51:36.144337  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144341  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144347  444547 command_runner.go:130] >       },
	I0819 18:51:36.144352  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144358  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144365  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144374  444547 command_runner.go:130] >     },
	I0819 18:51:36.144380  444547 command_runner.go:130] >     {
	I0819 18:51:36.144389  444547 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 18:51:36.144399  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144408  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 18:51:36.144419  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144427  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144435  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 18:51:36.144449  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 18:51:36.144471  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144501  444547 command_runner.go:130] >       "size": "89437512",
	I0819 18:51:36.144507  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144516  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144521  444547 command_runner.go:130] >       },
	I0819 18:51:36.144526  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144532  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144541  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144547  444547 command_runner.go:130] >     },
	I0819 18:51:36.144558  444547 command_runner.go:130] >     {
	I0819 18:51:36.144568  444547 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 18:51:36.144577  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144585  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 18:51:36.144593  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144600  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144611  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 18:51:36.144623  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 18:51:36.144632  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144640  444547 command_runner.go:130] >       "size": "92728217",
	I0819 18:51:36.144649  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.144656  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144663  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144669  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144677  444547 command_runner.go:130] >     },
	I0819 18:51:36.144682  444547 command_runner.go:130] >     {
	I0819 18:51:36.144694  444547 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 18:51:36.144704  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144716  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 18:51:36.144725  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144734  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144755  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 18:51:36.144768  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 18:51:36.144775  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144780  444547 command_runner.go:130] >       "size": "68420936",
	I0819 18:51:36.144789  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144798  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144807  444547 command_runner.go:130] >       },
	I0819 18:51:36.144816  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144826  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144835  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144843  444547 command_runner.go:130] >     },
	I0819 18:51:36.144849  444547 command_runner.go:130] >     {
	I0819 18:51:36.144864  444547 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 18:51:36.144873  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144882  444547 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 18:51:36.144892  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144901  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144912  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 18:51:36.144926  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 18:51:36.144934  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144940  444547 command_runner.go:130] >       "size": "742080",
	I0819 18:51:36.144944  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144950  444547 command_runner.go:130] >         "value": "65535"
	I0819 18:51:36.144958  444547 command_runner.go:130] >       },
	I0819 18:51:36.144968  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144979  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144988  444547 command_runner.go:130] >       "pinned": true
	I0819 18:51:36.144995  444547 command_runner.go:130] >     }
	I0819 18:51:36.145001  444547 command_runner.go:130] >   ]
	I0819 18:51:36.145008  444547 command_runner.go:130] > }
	I0819 18:51:36.145182  444547 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:51:36.145198  444547 cache_images.go:84] Images are preloaded, skipping loading
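
	The "preloaded, skipping loading" decision above is driven by the crictl JSON listings shown twice before it. A minimal sketch, assuming it runs inside the guest with passwordless sudo, of decoding that listing and checking for one required tag; the struct mirrors only the fields visible in the log and is not minikube's actual code:

	// imagecheck.go: decode `sudo crictl images --output json` and look for one tag.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the JSON shape printed in the log above.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl images:", err)
			return
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("decode:", err)
			return
		}
		// One of the tags expected for Kubernetes v1.31.0 on CRI-O.
		want := "registry.k8s.io/kube-apiserver:v1.31.0"
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Println("found", want, "as", img.ID)
					return
				}
			}
		}
		fmt.Println(want, "not present; preload extraction would be required")
	}
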
	I0819 18:51:36.145207  444547 kubeadm.go:934] updating node { 192.168.39.22 8441 v1.31.0 crio true true} ...
	I0819 18:51:36.145347  444547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-124593 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
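
	The kubelet unit fragment above is generated from per-node values (Kubernetes version, hostname override, node IP). A small sketch of rendering that same drop-in with text/template; the template text is copied from the log, while the struct and its field names are illustrative rather than minikube's:

	// kubeletunit_sketch.go: render the kubelet systemd drop-in from node values.
	package main

	import (
		"os"
		"text/template"
	)

	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		_ = tmpl.Execute(os.Stdout, struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.31.0", "functional-124593", "192.168.39.22"})
	}
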
	I0819 18:51:36.145440  444547 ssh_runner.go:195] Run: crio config
	I0819 18:51:36.185689  444547 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 18:51:36.185722  444547 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 18:51:36.185733  444547 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 18:51:36.185738  444547 command_runner.go:130] > #
	I0819 18:51:36.185763  444547 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 18:51:36.185772  444547 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 18:51:36.185782  444547 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 18:51:36.185794  444547 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 18:51:36.185800  444547 command_runner.go:130] > # reload'.
	I0819 18:51:36.185810  444547 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 18:51:36.185824  444547 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 18:51:36.185834  444547 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 18:51:36.185851  444547 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 18:51:36.185857  444547 command_runner.go:130] > [crio]
	I0819 18:51:36.185867  444547 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 18:51:36.185878  444547 command_runner.go:130] > # containers images, in this directory.
	I0819 18:51:36.185886  444547 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 18:51:36.185906  444547 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 18:51:36.185916  444547 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 18:51:36.185927  444547 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 18:51:36.185937  444547 command_runner.go:130] > # imagestore = ""
	I0819 18:51:36.185947  444547 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 18:51:36.185960  444547 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 18:51:36.185968  444547 command_runner.go:130] > storage_driver = "overlay"
	I0819 18:51:36.185979  444547 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 18:51:36.185990  444547 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 18:51:36.186001  444547 command_runner.go:130] > storage_option = [
	I0819 18:51:36.186010  444547 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 18:51:36.186018  444547 command_runner.go:130] > ]
	I0819 18:51:36.186029  444547 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 18:51:36.186041  444547 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 18:51:36.186052  444547 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 18:51:36.186068  444547 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 18:51:36.186082  444547 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 18:51:36.186092  444547 command_runner.go:130] > # always happen on a node reboot
	I0819 18:51:36.186103  444547 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 18:51:36.186124  444547 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 18:51:36.186136  444547 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 18:51:36.186147  444547 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 18:51:36.186155  444547 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 18:51:36.186168  444547 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 18:51:36.186183  444547 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 18:51:36.186193  444547 command_runner.go:130] > # internal_wipe = true
	I0819 18:51:36.186206  444547 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 18:51:36.186217  444547 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 18:51:36.186227  444547 command_runner.go:130] > # internal_repair = false
	I0819 18:51:36.186235  444547 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 18:51:36.186247  444547 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 18:51:36.186256  444547 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 18:51:36.186268  444547 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0819 18:51:36.186303  444547 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 18:51:36.186317  444547 command_runner.go:130] > [crio.api]
	I0819 18:51:36.186326  444547 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 18:51:36.186333  444547 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 18:51:36.186342  444547 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 18:51:36.186353  444547 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 18:51:36.186363  444547 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 18:51:36.186374  444547 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 18:51:36.186386  444547 command_runner.go:130] > # stream_port = "0"
	I0819 18:51:36.186395  444547 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 18:51:36.186402  444547 command_runner.go:130] > # stream_enable_tls = false
	I0819 18:51:36.186409  444547 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 18:51:36.186418  444547 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 18:51:36.186429  444547 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 18:51:36.186441  444547 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 18:51:36.186450  444547 command_runner.go:130] > # minutes.
	I0819 18:51:36.186457  444547 command_runner.go:130] > # stream_tls_cert = ""
	I0819 18:51:36.186468  444547 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 18:51:36.186486  444547 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 18:51:36.186498  444547 command_runner.go:130] > # stream_tls_key = ""
	I0819 18:51:36.186511  444547 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 18:51:36.186523  444547 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 18:51:36.186547  444547 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 18:51:36.186556  444547 command_runner.go:130] > # stream_tls_ca = ""
	I0819 18:51:36.186567  444547 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 18:51:36.186578  444547 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 18:51:36.186589  444547 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 18:51:36.186600  444547 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0819 18:51:36.186610  444547 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 18:51:36.186622  444547 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 18:51:36.186629  444547 command_runner.go:130] > [crio.runtime]
	I0819 18:51:36.186639  444547 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 18:51:36.186650  444547 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 18:51:36.186659  444547 command_runner.go:130] > # "nofile=1024:2048"
	I0819 18:51:36.186670  444547 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 18:51:36.186674  444547 command_runner.go:130] > # default_ulimits = [
	I0819 18:51:36.186678  444547 command_runner.go:130] > # ]
	I0819 18:51:36.186687  444547 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 18:51:36.186701  444547 command_runner.go:130] > # no_pivot = false
	I0819 18:51:36.186714  444547 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 18:51:36.186727  444547 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 18:51:36.186738  444547 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 18:51:36.186747  444547 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 18:51:36.186758  444547 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 18:51:36.186773  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 18:51:36.186783  444547 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 18:51:36.186791  444547 command_runner.go:130] > # Cgroup setting for conmon
	I0819 18:51:36.186805  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 18:51:36.186814  444547 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 18:51:36.186824  444547 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 18:51:36.186834  444547 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 18:51:36.186845  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 18:51:36.186855  444547 command_runner.go:130] > conmon_env = [
	I0819 18:51:36.186864  444547 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 18:51:36.186872  444547 command_runner.go:130] > ]
	I0819 18:51:36.186881  444547 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 18:51:36.186891  444547 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 18:51:36.186902  444547 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 18:51:36.186911  444547 command_runner.go:130] > # default_env = [
	I0819 18:51:36.186916  444547 command_runner.go:130] > # ]
	I0819 18:51:36.186957  444547 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 18:51:36.186977  444547 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0819 18:51:36.186983  444547 command_runner.go:130] > # selinux = false
	I0819 18:51:36.186992  444547 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 18:51:36.187004  444547 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 18:51:36.187019  444547 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 18:51:36.187029  444547 command_runner.go:130] > # seccomp_profile = ""
	I0819 18:51:36.187038  444547 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 18:51:36.187049  444547 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 18:51:36.187059  444547 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 18:51:36.187069  444547 command_runner.go:130] > # which might increase security.
	I0819 18:51:36.187074  444547 command_runner.go:130] > # This option is currently deprecated,
	I0819 18:51:36.187084  444547 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 18:51:36.187095  444547 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 18:51:36.187107  444547 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 18:51:36.187127  444547 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 18:51:36.187139  444547 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 18:51:36.187152  444547 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0819 18:51:36.187160  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.187167  444547 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 18:51:36.187178  444547 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 18:51:36.187188  444547 command_runner.go:130] > # the cgroup blockio controller.
	I0819 18:51:36.187200  444547 command_runner.go:130] > # blockio_config_file = ""
	I0819 18:51:36.187214  444547 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 18:51:36.187224  444547 command_runner.go:130] > # blockio parameters.
	I0819 18:51:36.187231  444547 command_runner.go:130] > # blockio_reload = false
	I0819 18:51:36.187241  444547 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 18:51:36.187250  444547 command_runner.go:130] > # irqbalance daemon.
	I0819 18:51:36.187259  444547 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 18:51:36.187271  444547 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0819 18:51:36.187285  444547 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 18:51:36.187297  444547 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 18:51:36.187309  444547 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 18:51:36.187322  444547 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 18:51:36.187332  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.187344  444547 command_runner.go:130] > # rdt_config_file = ""
	I0819 18:51:36.187353  444547 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 18:51:36.187363  444547 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 18:51:36.187390  444547 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 18:51:36.187400  444547 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 18:51:36.187410  444547 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 18:51:36.187425  444547 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 18:51:36.187435  444547 command_runner.go:130] > # will be added.
	I0819 18:51:36.187442  444547 command_runner.go:130] > # default_capabilities = [
	I0819 18:51:36.187451  444547 command_runner.go:130] > # 	"CHOWN",
	I0819 18:51:36.187458  444547 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 18:51:36.187466  444547 command_runner.go:130] > # 	"FSETID",
	I0819 18:51:36.187476  444547 command_runner.go:130] > # 	"FOWNER",
	I0819 18:51:36.187484  444547 command_runner.go:130] > # 	"SETGID",
	I0819 18:51:36.187490  444547 command_runner.go:130] > # 	"SETUID",
	I0819 18:51:36.187499  444547 command_runner.go:130] > # 	"SETPCAP",
	I0819 18:51:36.187506  444547 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 18:51:36.187516  444547 command_runner.go:130] > # 	"KILL",
	I0819 18:51:36.187521  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187536  444547 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 18:51:36.187549  444547 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 18:51:36.187564  444547 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 18:51:36.187577  444547 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 18:51:36.187588  444547 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 18:51:36.187595  444547 command_runner.go:130] > default_sysctls = [
	I0819 18:51:36.187599  444547 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 18:51:36.187602  444547 command_runner.go:130] > ]
	I0819 18:51:36.187607  444547 command_runner.go:130] > # List of devices on the host that a
	I0819 18:51:36.187613  444547 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 18:51:36.187617  444547 command_runner.go:130] > # allowed_devices = [
	I0819 18:51:36.187621  444547 command_runner.go:130] > # 	"/dev/fuse",
	I0819 18:51:36.187626  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187637  444547 command_runner.go:130] > # List of additional devices. specified as
	I0819 18:51:36.187650  444547 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 18:51:36.187663  444547 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 18:51:36.187675  444547 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 18:51:36.187685  444547 command_runner.go:130] > # additional_devices = [
	I0819 18:51:36.187690  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187699  444547 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 18:51:36.187703  444547 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 18:51:36.187707  444547 command_runner.go:130] > # 	"/etc/cdi",
	I0819 18:51:36.187711  444547 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 18:51:36.187715  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187721  444547 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 18:51:36.187729  444547 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 18:51:36.187735  444547 command_runner.go:130] > # Defaults to false.
	I0819 18:51:36.187739  444547 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 18:51:36.187746  444547 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 18:51:36.187753  444547 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 18:51:36.187756  444547 command_runner.go:130] > # hooks_dir = [
	I0819 18:51:36.187761  444547 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 18:51:36.187766  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187775  444547 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 18:51:36.187788  444547 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 18:51:36.187800  444547 command_runner.go:130] > # its default mounts from the following two files:
	I0819 18:51:36.187808  444547 command_runner.go:130] > #
	I0819 18:51:36.187819  444547 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 18:51:36.187831  444547 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 18:51:36.187841  444547 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 18:51:36.187846  444547 command_runner.go:130] > #
	I0819 18:51:36.187856  444547 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 18:51:36.187870  444547 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 18:51:36.187887  444547 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 18:51:36.187899  444547 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 18:51:36.187907  444547 command_runner.go:130] > #
	I0819 18:51:36.187915  444547 command_runner.go:130] > # default_mounts_file = ""
	I0819 18:51:36.187927  444547 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 18:51:36.187940  444547 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 18:51:36.187948  444547 command_runner.go:130] > pids_limit = 1024
	I0819 18:51:36.187961  444547 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0819 18:51:36.187976  444547 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 18:51:36.187989  444547 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 18:51:36.188004  444547 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 18:51:36.188020  444547 command_runner.go:130] > # log_size_max = -1
	I0819 18:51:36.188034  444547 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 18:51:36.188043  444547 command_runner.go:130] > # log_to_journald = false
	I0819 18:51:36.188053  444547 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 18:51:36.188064  444547 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 18:51:36.188076  444547 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 18:51:36.188084  444547 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 18:51:36.188095  444547 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 18:51:36.188103  444547 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 18:51:36.188113  444547 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 18:51:36.188123  444547 command_runner.go:130] > # read_only = false
	I0819 18:51:36.188133  444547 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 18:51:36.188144  444547 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 18:51:36.188151  444547 command_runner.go:130] > # live configuration reload.
	I0819 18:51:36.188161  444547 command_runner.go:130] > # log_level = "info"
	I0819 18:51:36.188171  444547 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 18:51:36.188182  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.188190  444547 command_runner.go:130] > # log_filter = ""
	I0819 18:51:36.188199  444547 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 18:51:36.188216  444547 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 18:51:36.188225  444547 command_runner.go:130] > # separated by comma.
	I0819 18:51:36.188237  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188247  444547 command_runner.go:130] > # uid_mappings = ""
	I0819 18:51:36.188257  444547 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 18:51:36.188269  444547 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 18:51:36.188278  444547 command_runner.go:130] > # separated by comma.
	I0819 18:51:36.188293  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188303  444547 command_runner.go:130] > # gid_mappings = ""
	I0819 18:51:36.188313  444547 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 18:51:36.188325  444547 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 18:51:36.188337  444547 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 18:51:36.188351  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188359  444547 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 18:51:36.188366  444547 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 18:51:36.188375  444547 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 18:51:36.188381  444547 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 18:51:36.188390  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188394  444547 command_runner.go:130] > # minimum_mappable_gid = -1
	I0819 18:51:36.188402  444547 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 18:51:36.188408  444547 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 18:51:36.188415  444547 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 18:51:36.188419  444547 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 18:51:36.188424  444547 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 18:51:36.188430  444547 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 18:51:36.188437  444547 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 18:51:36.188441  444547 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 18:51:36.188445  444547 command_runner.go:130] > drop_infra_ctr = false
	I0819 18:51:36.188451  444547 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 18:51:36.188458  444547 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 18:51:36.188465  444547 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 18:51:36.188471  444547 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 18:51:36.188482  444547 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 18:51:36.188489  444547 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 18:51:36.188495  444547 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 18:51:36.188502  444547 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 18:51:36.188506  444547 command_runner.go:130] > # shared_cpuset = ""
	I0819 18:51:36.188514  444547 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 18:51:36.188519  444547 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 18:51:36.188524  444547 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 18:51:36.188531  444547 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 18:51:36.188537  444547 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 18:51:36.188549  444547 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 18:51:36.188561  444547 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 18:51:36.188571  444547 command_runner.go:130] > # enable_criu_support = false
	I0819 18:51:36.188579  444547 command_runner.go:130] > # Enable/disable the generation of the container,
	I0819 18:51:36.188591  444547 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 18:51:36.188598  444547 command_runner.go:130] > # enable_pod_events = false
	I0819 18:51:36.188604  444547 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 18:51:36.188620  444547 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 18:51:36.188626  444547 command_runner.go:130] > # default_runtime = "runc"
	I0819 18:51:36.188631  444547 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 18:51:36.188638  444547 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0819 18:51:36.188649  444547 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 18:51:36.188656  444547 command_runner.go:130] > # creation as a file is not desired either.
	I0819 18:51:36.188664  444547 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 18:51:36.188671  444547 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 18:51:36.188675  444547 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 18:51:36.188681  444547 command_runner.go:130] > # ]
	I0819 18:51:36.188686  444547 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 18:51:36.188694  444547 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 18:51:36.188700  444547 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 18:51:36.188708  444547 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 18:51:36.188711  444547 command_runner.go:130] > #
	I0819 18:51:36.188716  444547 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 18:51:36.188720  444547 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 18:51:36.188744  444547 command_runner.go:130] > # runtime_type = "oci"
	I0819 18:51:36.188752  444547 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 18:51:36.188757  444547 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 18:51:36.188763  444547 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 18:51:36.188768  444547 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 18:51:36.188774  444547 command_runner.go:130] > # monitor_env = []
	I0819 18:51:36.188778  444547 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 18:51:36.188782  444547 command_runner.go:130] > # allowed_annotations = []
	I0819 18:51:36.188790  444547 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 18:51:36.188795  444547 command_runner.go:130] > # Where:
	I0819 18:51:36.188800  444547 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 18:51:36.188806  444547 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 18:51:36.188813  444547 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 18:51:36.188822  444547 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 18:51:36.188828  444547 command_runner.go:130] > #   in $PATH.
	I0819 18:51:36.188834  444547 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 18:51:36.188839  444547 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 18:51:36.188845  444547 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 18:51:36.188851  444547 command_runner.go:130] > #   state.
	I0819 18:51:36.188858  444547 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 18:51:36.188865  444547 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0819 18:51:36.188871  444547 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 18:51:36.188879  444547 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 18:51:36.188885  444547 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 18:51:36.188893  444547 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 18:51:36.188898  444547 command_runner.go:130] > #   The currently recognized values are:
	I0819 18:51:36.188904  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 18:51:36.188911  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 18:51:36.188917  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 18:51:36.188925  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 18:51:36.188934  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 18:51:36.188940  444547 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 18:51:36.188948  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 18:51:36.188954  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 18:51:36.188962  444547 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 18:51:36.188968  444547 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 18:51:36.188972  444547 command_runner.go:130] > #   deprecated option "conmon".
	I0819 18:51:36.188979  444547 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 18:51:36.188985  444547 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 18:51:36.188992  444547 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 18:51:36.188998  444547 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 18:51:36.189006  444547 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0819 18:51:36.189013  444547 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 18:51:36.189019  444547 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 18:51:36.189026  444547 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0819 18:51:36.189031  444547 command_runner.go:130] > #
	I0819 18:51:36.189041  444547 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 18:51:36.189044  444547 command_runner.go:130] > #
	I0819 18:51:36.189051  444547 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 18:51:36.189058  444547 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 18:51:36.189062  444547 command_runner.go:130] > #
	I0819 18:51:36.189070  444547 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 18:51:36.189078  444547 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 18:51:36.189082  444547 command_runner.go:130] > #
	I0819 18:51:36.189089  444547 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 18:51:36.189095  444547 command_runner.go:130] > # feature.
	I0819 18:51:36.189100  444547 command_runner.go:130] > #
	I0819 18:51:36.189106  444547 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0819 18:51:36.189114  444547 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 18:51:36.189120  444547 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 18:51:36.189127  444547 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 18:51:36.189146  444547 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 18:51:36.189154  444547 command_runner.go:130] > #
	I0819 18:51:36.189163  444547 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 18:51:36.189174  444547 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 18:51:36.189178  444547 command_runner.go:130] > #
	I0819 18:51:36.189184  444547 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 18:51:36.189192  444547 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 18:51:36.189195  444547 command_runner.go:130] > #
	I0819 18:51:36.189203  444547 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 18:51:36.189209  444547 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 18:51:36.189214  444547 command_runner.go:130] > # limitation.
	I0819 18:51:36.189220  444547 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 18:51:36.189226  444547 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 18:51:36.189230  444547 command_runner.go:130] > runtime_type = "oci"
	I0819 18:51:36.189234  444547 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 18:51:36.189240  444547 command_runner.go:130] > runtime_config_path = ""
	I0819 18:51:36.189244  444547 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 18:51:36.189248  444547 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 18:51:36.189252  444547 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 18:51:36.189256  444547 command_runner.go:130] > monitor_env = [
	I0819 18:51:36.189261  444547 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 18:51:36.189266  444547 command_runner.go:130] > ]
	I0819 18:51:36.189270  444547 command_runner.go:130] > privileged_without_host_devices = false
	I0819 18:51:36.189278  444547 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 18:51:36.189283  444547 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 18:51:36.189291  444547 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 18:51:36.189302  444547 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0819 18:51:36.189311  444547 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 18:51:36.189317  444547 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 18:51:36.189328  444547 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 18:51:36.189339  444547 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 18:51:36.189346  444547 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 18:51:36.189353  444547 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 18:51:36.189358  444547 command_runner.go:130] > # Example:
	I0819 18:51:36.189363  444547 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 18:51:36.189370  444547 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 18:51:36.189374  444547 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 18:51:36.189382  444547 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 18:51:36.189386  444547 command_runner.go:130] > # cpuset = 0
	I0819 18:51:36.189393  444547 command_runner.go:130] > # cpushares = "0-1"
	I0819 18:51:36.189396  444547 command_runner.go:130] > # Where:
	I0819 18:51:36.189401  444547 command_runner.go:130] > # The workload name is workload-type.
	I0819 18:51:36.189409  444547 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 18:51:36.189415  444547 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 18:51:36.189422  444547 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 18:51:36.189430  444547 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 18:51:36.189437  444547 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0819 18:51:36.189442  444547 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 18:51:36.189449  444547 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 18:51:36.189455  444547 command_runner.go:130] > # Default value is set to true
	I0819 18:51:36.189459  444547 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 18:51:36.189469  444547 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 18:51:36.189478  444547 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 18:51:36.189484  444547 command_runner.go:130] > # Default value is set to 'false'
	I0819 18:51:36.189489  444547 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 18:51:36.189497  444547 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 18:51:36.189500  444547 command_runner.go:130] > #
	I0819 18:51:36.189505  444547 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 18:51:36.189513  444547 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 18:51:36.189519  444547 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 18:51:36.189528  444547 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 18:51:36.189536  444547 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0819 18:51:36.189542  444547 command_runner.go:130] > [crio.image]
	I0819 18:51:36.189548  444547 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 18:51:36.189554  444547 command_runner.go:130] > # default_transport = "docker://"
	I0819 18:51:36.189560  444547 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 18:51:36.189569  444547 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 18:51:36.189574  444547 command_runner.go:130] > # global_auth_file = ""
	I0819 18:51:36.189578  444547 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 18:51:36.189583  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.189590  444547 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 18:51:36.189596  444547 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 18:51:36.189604  444547 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 18:51:36.189609  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.189615  444547 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 18:51:36.189620  444547 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 18:51:36.189626  444547 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0819 18:51:36.189632  444547 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0819 18:51:36.189639  444547 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 18:51:36.189643  444547 command_runner.go:130] > # pause_command = "/pause"
	I0819 18:51:36.189649  444547 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 18:51:36.189655  444547 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 18:51:36.189660  444547 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 18:51:36.189670  444547 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 18:51:36.189678  444547 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 18:51:36.189684  444547 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 18:51:36.189690  444547 command_runner.go:130] > # pinned_images = [
	I0819 18:51:36.189693  444547 command_runner.go:130] > # ]
	I0819 18:51:36.189700  444547 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 18:51:36.189707  444547 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 18:51:36.189713  444547 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 18:51:36.189721  444547 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 18:51:36.189726  444547 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 18:51:36.189732  444547 command_runner.go:130] > # signature_policy = ""
	I0819 18:51:36.189737  444547 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 18:51:36.189744  444547 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 18:51:36.189754  444547 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 18:51:36.189762  444547 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0819 18:51:36.189770  444547 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 18:51:36.189775  444547 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 18:51:36.189781  444547 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 18:51:36.189786  444547 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 18:51:36.189791  444547 command_runner.go:130] > # changing them here.
	I0819 18:51:36.189795  444547 command_runner.go:130] > # insecure_registries = [
	I0819 18:51:36.189798  444547 command_runner.go:130] > # ]
	I0819 18:51:36.189804  444547 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 18:51:36.189808  444547 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0819 18:51:36.189812  444547 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 18:51:36.189816  444547 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 18:51:36.189820  444547 command_runner.go:130] > # big_files_temporary_dir = ""
	I0819 18:51:36.189826  444547 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 18:51:36.189829  444547 command_runner.go:130] > # CNI plugins.
	I0819 18:51:36.189832  444547 command_runner.go:130] > [crio.network]
	I0819 18:51:36.189838  444547 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 18:51:36.189842  444547 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0819 18:51:36.189847  444547 command_runner.go:130] > # cni_default_network = ""
	I0819 18:51:36.189851  444547 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 18:51:36.189855  444547 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 18:51:36.189860  444547 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 18:51:36.189863  444547 command_runner.go:130] > # plugin_dirs = [
	I0819 18:51:36.189867  444547 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 18:51:36.189870  444547 command_runner.go:130] > # ]
	I0819 18:51:36.189875  444547 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0819 18:51:36.189879  444547 command_runner.go:130] > [crio.metrics]
	I0819 18:51:36.189883  444547 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 18:51:36.189887  444547 command_runner.go:130] > enable_metrics = true
	I0819 18:51:36.189891  444547 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 18:51:36.189895  444547 command_runner.go:130] > # Per default all metrics are enabled.
	I0819 18:51:36.189900  444547 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0819 18:51:36.189906  444547 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 18:51:36.189911  444547 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 18:51:36.189915  444547 command_runner.go:130] > # metrics_collectors = [
	I0819 18:51:36.189918  444547 command_runner.go:130] > # 	"operations",
	I0819 18:51:36.189923  444547 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 18:51:36.189927  444547 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 18:51:36.189931  444547 command_runner.go:130] > # 	"operations_errors",
	I0819 18:51:36.189935  444547 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 18:51:36.189938  444547 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 18:51:36.189946  444547 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 18:51:36.189950  444547 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 18:51:36.189954  444547 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 18:51:36.189958  444547 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 18:51:36.189962  444547 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 18:51:36.189970  444547 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 18:51:36.189973  444547 command_runner.go:130] > # 	"containers_oom_total",
	I0819 18:51:36.189977  444547 command_runner.go:130] > # 	"containers_oom",
	I0819 18:51:36.189980  444547 command_runner.go:130] > # 	"processes_defunct",
	I0819 18:51:36.189984  444547 command_runner.go:130] > # 	"operations_total",
	I0819 18:51:36.189988  444547 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 18:51:36.189993  444547 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 18:51:36.189997  444547 command_runner.go:130] > # 	"operations_errors_total",
	I0819 18:51:36.190001  444547 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 18:51:36.190005  444547 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 18:51:36.190009  444547 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 18:51:36.190013  444547 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 18:51:36.190017  444547 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 18:51:36.190021  444547 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 18:51:36.190026  444547 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 18:51:36.190033  444547 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 18:51:36.190035  444547 command_runner.go:130] > # ]
	I0819 18:51:36.190040  444547 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 18:51:36.190046  444547 command_runner.go:130] > # metrics_port = 9090
	I0819 18:51:36.190051  444547 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 18:51:36.190055  444547 command_runner.go:130] > # metrics_socket = ""
	I0819 18:51:36.190061  444547 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 18:51:36.190069  444547 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 18:51:36.190075  444547 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 18:51:36.190082  444547 command_runner.go:130] > # certificate on any modification event.
	I0819 18:51:36.190085  444547 command_runner.go:130] > # metrics_cert = ""
	I0819 18:51:36.190090  444547 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 18:51:36.190097  444547 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 18:51:36.190101  444547 command_runner.go:130] > # metrics_key = ""
	I0819 18:51:36.190106  444547 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 18:51:36.190110  444547 command_runner.go:130] > [crio.tracing]
	I0819 18:51:36.190117  444547 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 18:51:36.190124  444547 command_runner.go:130] > # enable_tracing = false
	I0819 18:51:36.190129  444547 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0819 18:51:36.190135  444547 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 18:51:36.190142  444547 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 18:51:36.190147  444547 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0819 18:51:36.190151  444547 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 18:51:36.190154  444547 command_runner.go:130] > [crio.nri]
	I0819 18:51:36.190158  444547 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 18:51:36.190167  444547 command_runner.go:130] > # enable_nri = false
	I0819 18:51:36.190172  444547 command_runner.go:130] > # NRI socket to listen on.
	I0819 18:51:36.190177  444547 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 18:51:36.190183  444547 command_runner.go:130] > # NRI plugin directory to use.
	I0819 18:51:36.190188  444547 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 18:51:36.190194  444547 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 18:51:36.190198  444547 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 18:51:36.190205  444547 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 18:51:36.190209  444547 command_runner.go:130] > # nri_disable_connections = false
	I0819 18:51:36.190217  444547 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 18:51:36.190221  444547 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 18:51:36.190228  444547 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 18:51:36.190233  444547 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0819 18:51:36.190238  444547 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 18:51:36.190243  444547 command_runner.go:130] > [crio.stats]
	I0819 18:51:36.190249  444547 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 18:51:36.190255  444547 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 18:51:36.190259  444547 command_runner.go:130] > # stats_collection_period = 0
	I0819 18:51:36.190450  444547 command_runner.go:130] ! time="2024-08-19 18:51:36.161529726Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 18:51:36.190501  444547 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
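For reference, most of the CRI-O configuration dumped above is commented-out defaults; in the portion shown, only a handful of keys are set explicitly (drop_infra_ctr, pinns_path, the [crio.runtime.runtimes.runc] table, pause_image and enable_metrics). A minimal Go sketch, not part of the test suite and assuming the conventional /etc/crio/crio.conf path, that prints just the keys a config file actually sets:

	// crio_overrides.go: a minimal sketch that lists the keys a crio.conf
	// actually sets, i.e. lines that are neither comments nor [section]
	// headers. The /etc/crio/crio.conf path is an assumption.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/etc/crio/crio.conf")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			// Skip blanks, comments and section headers; what remains are
			// the explicitly-set keys such as pause_image or pinns_path.
			if line == "" || strings.HasPrefix(line, "#") || strings.HasPrefix(line, "[") {
				continue
			}
			fmt.Println(line)
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}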
	I0819 18:51:36.190630  444547 cni.go:84] Creating CNI manager for ""
	I0819 18:51:36.190641  444547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:51:36.190651  444547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:51:36.190674  444547 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8441 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-124593 NodeName:functional-124593 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:51:36.190815  444547 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-124593"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:51:36.190886  444547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:51:36.200955  444547 command_runner.go:130] > kubeadm
	I0819 18:51:36.200981  444547 command_runner.go:130] > kubectl
	I0819 18:51:36.200986  444547 command_runner.go:130] > kubelet
	I0819 18:51:36.201016  444547 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:51:36.201072  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:51:36.211041  444547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0819 18:51:36.228264  444547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:51:36.245722  444547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
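The kubeadm config rendered above and copied here to /var/tmp/minikube/kubeadm.yaml.new bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) into one multi-document YAML stream. A minimal Go sketch, purely illustrative and using only the standard library, that splits such a stream and reports each document's kind:

	// kubeadm_kinds.go: a minimal sketch that splits a multi-document kubeadm
	// config and prints the "kind:" line of each document. The path matches
	// the scp destination in the log above; reading it locally is an assumption.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Documents are separated by "---" lines in the generated config.
		for _, doc := range strings.Split(string(data), "\n---") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
					fmt.Println(strings.TrimSpace(line))
					break
				}
			}
		}
	}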
	I0819 18:51:36.263018  444547 ssh_runner.go:195] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I0819 18:51:36.267130  444547 command_runner.go:130] > 192.168.39.22	control-plane.minikube.internal
	I0819 18:51:36.267229  444547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:51:36.398107  444547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:51:36.412895  444547 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593 for IP: 192.168.39.22
	I0819 18:51:36.412924  444547 certs.go:194] generating shared ca certs ...
	I0819 18:51:36.412943  444547 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:51:36.413154  444547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 18:51:36.413203  444547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 18:51:36.413217  444547 certs.go:256] generating profile certs ...
	I0819 18:51:36.413317  444547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.key
	I0819 18:51:36.413414  444547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key.aa5a99d1
	I0819 18:51:36.413463  444547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key
	I0819 18:51:36.413478  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:51:36.413496  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:51:36.413514  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:51:36.413543  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:51:36.413558  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:51:36.413577  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:51:36.413596  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:51:36.413612  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:51:36.413684  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 18:51:36.413728  444547 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 18:51:36.413741  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 18:51:36.413782  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:51:36.413816  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:51:36.413853  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 18:51:36.413906  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 18:51:36.413944  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.413964  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.413981  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.414774  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:51:36.439176  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:51:36.463796  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:51:36.490998  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 18:51:36.514746  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 18:51:36.538661  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:51:36.562630  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:51:36.586739  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 18:51:36.610889  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:51:36.634562  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 18:51:36.658286  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 18:51:36.681715  444547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:51:36.698451  444547 ssh_runner.go:195] Run: openssl version
	I0819 18:51:36.704220  444547 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 18:51:36.704339  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 18:51:36.715389  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720025  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720080  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720142  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.725901  444547 command_runner.go:130] > 51391683
	I0819 18:51:36.726015  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 18:51:36.736206  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 18:51:36.747737  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752558  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752599  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752642  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.758223  444547 command_runner.go:130] > 3ec20f2e
	I0819 18:51:36.758300  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:51:36.767946  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:51:36.779143  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783850  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783902  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783950  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.789800  444547 command_runner.go:130] > b5213941
	I0819 18:51:36.789894  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
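The hash-and-link steps above install each CA certificate under /etc/ssl/certs/<subject-hash>.0, which is how OpenSSL-based tooling locates trusted CAs. A simplified Go sketch of that step, shelling out to the same "openssl x509 -hash -noout" invocation seen in the log (paths are illustrative, and the intermediate /etc/ssl/certs copy used in the log is skipped here):

	// hashlink.go: a minimal sketch of the hash-and-symlink step above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above

		// Same command the log runs: print the OpenSSL subject hash of the cert.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		hash := strings.TrimSpace(string(out))

		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate "ln -f": ignore the error if the link does not exist yet
		if err := os.Symlink(pem, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(link, "->", pem)
	}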
	I0819 18:51:36.799700  444547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:51:36.804144  444547 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:51:36.804180  444547 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 18:51:36.804188  444547 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0819 18:51:36.804194  444547 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 18:51:36.804201  444547 command_runner.go:130] > Access: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804206  444547 command_runner.go:130] > Modify: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804217  444547 command_runner.go:130] > Change: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804222  444547 command_runner.go:130] >  Birth: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804284  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:51:36.810230  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.810339  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:51:36.816159  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.816241  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:51:36.821909  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.822019  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:51:36.827758  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.827847  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:51:36.833329  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.833420  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 18:51:36.838995  444547 command_runner.go:130] > Certificate will not expire
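The six "-checkend 86400" probes above ask whether each certificate expires within the next 24 hours. The same check can be expressed with the standard crypto/x509 package; a minimal sketch, using one of the files probed above:

	// checkend.go: a minimal sketch of the "openssl x509 -checkend 86400"
	// probe, using crypto/x509 to test whether the certificate expires
	// within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if time.Until(cert.NotAfter) > 24*time.Hour {
			fmt.Println("Certificate will not expire") // same wording as the log output
		} else {
			fmt.Println("Certificate will expire")
		}
	}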
	I0819 18:51:36.839152  444547 kubeadm.go:392] StartCluster: {Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:51:36.839251  444547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:51:36.839310  444547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:51:36.874453  444547 command_runner.go:130] > e908c3e4c32bde71b6441e3ee7b44b5559f7c82f0a6a46e750222d6420ee5768
	I0819 18:51:36.874803  444547 command_runner.go:130] > 790104913a298ce37e7cba7691f8e4d617c4a8638cb821e244b70555efd66fbf
	I0819 18:51:36.874823  444547 command_runner.go:130] > aa84abbb4c9f6505401955b079afdb2f11c046f6879ee08a957007aaca875b03
	I0819 18:51:36.874834  444547 command_runner.go:130] > d8bfd36fbe96397e082982d91cabf0e8c731c94d88ed6c685f08a18b2da4cc6c
	I0819 18:51:36.874843  444547 command_runner.go:130] > e4040e2a8622453f1ef2fd04190fd96bea2fd3de919c6b06deaa38e83d38cf5b
	I0819 18:51:36.874899  444547 command_runner.go:130] > 8df7ac76fe134c848ddad74ca75f51e5c19afffdbd0712cb3d74333cdc6b91dc
	I0819 18:51:36.875009  444547 command_runner.go:130] > 94e0fe7cb19015bd08c7406824ba63263d67eec22f1240ef0f0eaff258976b4f
	I0819 18:51:36.875035  444547 command_runner.go:130] > 871ee5a26fc4d8fe6f04e66a22c78ba6e6b80077bd1066ab3a3c1acb639de113
	I0819 18:51:36.875045  444547 command_runner.go:130] > 70ce15fbc3bc32cd55767aab9cde75fc9d6f452d9c22c53034e5c2af14442b32
	I0819 18:51:36.875236  444547 command_runner.go:130] > 7703464bfd87d33565890b044096dcd7f50a8b1340ec99f2a88431946c69fc1b
	I0819 18:51:36.875268  444547 command_runner.go:130] > d65fc62d1475e4da38a8c95d6b13d520659291d23ac972a2d197d382a7fa6027
	I0819 18:51:36.875360  444547 command_runner.go:130] > d89c8be1ccfda0fbaa81ee519dc2e56af6349b6060b1afd9b3ac512310edc348
	I0819 18:51:36.875408  444547 command_runner.go:130] > e38111b143b726d6b06442db0379d3ecea609946581cb76904945ed68a359c74
	I0819 18:51:36.876958  444547 cri.go:89] found id: "e908c3e4c32bde71b6441e3ee7b44b5559f7c82f0a6a46e750222d6420ee5768"
	I0819 18:51:36.876978  444547 cri.go:89] found id: "790104913a298ce37e7cba7691f8e4d617c4a8638cb821e244b70555efd66fbf"
	I0819 18:51:36.876984  444547 cri.go:89] found id: "aa84abbb4c9f6505401955b079afdb2f11c046f6879ee08a957007aaca875b03"
	I0819 18:51:36.876989  444547 cri.go:89] found id: "d8bfd36fbe96397e082982d91cabf0e8c731c94d88ed6c685f08a18b2da4cc6c"
	I0819 18:51:36.876993  444547 cri.go:89] found id: "e4040e2a8622453f1ef2fd04190fd96bea2fd3de919c6b06deaa38e83d38cf5b"
	I0819 18:51:36.876998  444547 cri.go:89] found id: "8df7ac76fe134c848ddad74ca75f51e5c19afffdbd0712cb3d74333cdc6b91dc"
	I0819 18:51:36.877002  444547 cri.go:89] found id: "94e0fe7cb19015bd08c7406824ba63263d67eec22f1240ef0f0eaff258976b4f"
	I0819 18:51:36.877006  444547 cri.go:89] found id: "871ee5a26fc4d8fe6f04e66a22c78ba6e6b80077bd1066ab3a3c1acb639de113"
	I0819 18:51:36.877010  444547 cri.go:89] found id: "70ce15fbc3bc32cd55767aab9cde75fc9d6f452d9c22c53034e5c2af14442b32"
	I0819 18:51:36.877024  444547 cri.go:89] found id: "7703464bfd87d33565890b044096dcd7f50a8b1340ec99f2a88431946c69fc1b"
	I0819 18:51:36.877032  444547 cri.go:89] found id: "d65fc62d1475e4da38a8c95d6b13d520659291d23ac972a2d197d382a7fa6027"
	I0819 18:51:36.877036  444547 cri.go:89] found id: "d89c8be1ccfda0fbaa81ee519dc2e56af6349b6060b1afd9b3ac512310edc348"
	I0819 18:51:36.877040  444547 cri.go:89] found id: "e38111b143b726d6b06442db0379d3ecea609946581cb76904945ed68a359c74"
	I0819 18:51:36.877044  444547 cri.go:89] found id: ""
	I0819 18:51:36.877087  444547 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-124593 --alsologtostderr -v=8": exit status 109
functional_test.go:663: soft start took 13m52.792410514s for "functional-124593" cluster.
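For reference, the stderr tail above ends with minikube enumerating kube-system containers over SSH (cri.go:54 / ssh_runner.go:195) before the start ultimately failed. The same query can be reproduced directly on the node with a short Go sketch; only the crictl command string is taken verbatim from the log, while the file name, error handling, and output formatting are illustrative and are not minikube's own code.

// list_kube_system.go - a minimal sketch, not minikube code: it re-runs the
// crictl query shown in the stderr tail ("crictl ps -a --quiet --label
// io.kubernetes.pod.namespace=kube-system") and prints the container IDs.
// Assumes it runs on the node itself with crictl installed and sudo available.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// One container ID per line, matching the "found id:" entries in the log.
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}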
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-124593 -n functional-124593
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-124593 -n functional-124593: exit status 2 (239.179863ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
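The status probe above passes --format={{.Host}}, a Go-template expression evaluated against minikube's status object, which is why stdout contains only the bare word "Running" even though the command exited non-zero. A hedged sketch of that templating mechanism follows; the Status struct is a stand-in for illustration and is not minikube's actual status type.

// format_host.go - illustrative only: shows how a --format={{.Host}} style flag
// renders a single field of a status struct, yielding the bare "Running" seen
// in the stdout block above. The Status type here is a hypothetical stand-in.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host    string
	Kubelet string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}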
helpers_test.go:244: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 logs -n 25
helpers_test.go:252: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ip      | addons-966657 ip               | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:42 UTC | 19 Aug 24 18:42 UTC |
	| addons  | addons-966657 addons disable   | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:42 UTC | 19 Aug 24 18:42 UTC |
	|         | ingress-dns --alsologtostderr  |                   |         |         |                     |                     |
	|         | -v=1                           |                   |         |         |                     |                     |
	| addons  | addons-966657 addons disable   | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:42 UTC | 19 Aug 24 18:43 UTC |
	|         | ingress --alsologtostderr -v=1 |                   |         |         |                     |                     |
	| addons  | addons-966657 addons           | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:45 UTC | 19 Aug 24 18:45 UTC |
	|         | disable metrics-server         |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| stop    | -p addons-966657               | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:45 UTC |                     |
	| addons  | enable dashboard -p            | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:47 UTC |                     |
	|         | addons-966657                  |                   |         |         |                     |                     |
	| addons  | disable dashboard -p           | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:47 UTC |                     |
	|         | addons-966657                  |                   |         |         |                     |                     |
	| addons  | disable gvisor -p              | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC |                     |
	|         | addons-966657                  |                   |         |         |                     |                     |
	| delete  | -p addons-966657               | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	| start   | -p nospam-212543 -n=1          | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | --memory=2250 --wait=false     |                   |         |         |                     |                     |
	|         | --log_dir=/tmp/nospam-212543   |                   |         |         |                     |                     |
	|         | --driver=kvm2                  |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC |                     |
	|         | /tmp/nospam-212543 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC |                     |
	|         | /tmp/nospam-212543 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC |                     |
	|         | /tmp/nospam-212543 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| pause   | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 pause       |                   |         |         |                     |                     |
	| pause   | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 pause       |                   |         |         |                     |                     |
	| pause   | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 pause       |                   |         |         |                     |                     |
	| unpause | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 unpause     |                   |         |         |                     |                     |
	| stop    | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 stop        |                   |         |         |                     |                     |
	| stop    | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:49 UTC |
	|         | /tmp/nospam-212543 stop        |                   |         |         |                     |                     |
	| stop    | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC | 19 Aug 24 18:49 UTC |
	|         | /tmp/nospam-212543 stop        |                   |         |         |                     |                     |
	| delete  | -p nospam-212543               | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC | 19 Aug 24 18:49 UTC |
	| start   | -p functional-124593           | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC | 19 Aug 24 18:49 UTC |
	|         | --memory=4000                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441          |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | -p functional-124593           | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC |                     |
	|         | --alsologtostderr -v=8         |                   |         |         |                     |                     |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:49:56
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:49:56.790328  444547 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:49:56.790453  444547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:49:56.790459  444547 out.go:358] Setting ErrFile to fd 2...
	I0819 18:49:56.790463  444547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:49:56.790638  444547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 18:49:56.791174  444547 out.go:352] Setting JSON to false
	I0819 18:49:56.792114  444547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9148,"bootTime":1724084249,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:49:56.792181  444547 start.go:139] virtualization: kvm guest
	I0819 18:49:56.794648  444547 out.go:177] * [functional-124593] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:49:56.796256  444547 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 18:49:56.796302  444547 notify.go:220] Checking for updates...
	I0819 18:49:56.799145  444547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:49:56.800604  444547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 18:49:56.802061  444547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 18:49:56.803353  444547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:49:56.804793  444547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:49:56.806582  444547 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:49:56.806680  444547 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 18:49:56.807152  444547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:49:56.807235  444547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:49:56.823439  444547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0819 18:49:56.823898  444547 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:49:56.824445  444547 main.go:141] libmachine: Using API Version  1
	I0819 18:49:56.824484  444547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:49:56.824923  444547 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:49:56.825223  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.864107  444547 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:49:56.865533  444547 start.go:297] selected driver: kvm2
	I0819 18:49:56.865559  444547 start.go:901] validating driver "kvm2" against &{Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:49:56.865676  444547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:49:56.866051  444547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:49:56.866145  444547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:49:56.882415  444547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:49:56.883177  444547 cni.go:84] Creating CNI manager for ""
	I0819 18:49:56.883193  444547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:49:56.883244  444547 start.go:340] cluster config:
	{Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:49:56.883396  444547 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:49:56.885199  444547 out.go:177] * Starting "functional-124593" primary control-plane node in "functional-124593" cluster
	I0819 18:49:56.886649  444547 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:49:56.886699  444547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:49:56.886708  444547 cache.go:56] Caching tarball of preloaded images
	I0819 18:49:56.886828  444547 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:49:56.886844  444547 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:49:56.886977  444547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/config.json ...
	I0819 18:49:56.887255  444547 start.go:360] acquireMachinesLock for functional-124593: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:49:56.887316  444547 start.go:364] duration metric: took 31.483µs to acquireMachinesLock for "functional-124593"
	I0819 18:49:56.887333  444547 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:49:56.887345  444547 fix.go:54] fixHost starting: 
	I0819 18:49:56.887711  444547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:49:56.887765  444547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:49:56.903210  444547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43899
	I0819 18:49:56.903686  444547 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:49:56.904263  444547 main.go:141] libmachine: Using API Version  1
	I0819 18:49:56.904298  444547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:49:56.904680  444547 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:49:56.904935  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.905158  444547 main.go:141] libmachine: (functional-124593) Calling .GetState
	I0819 18:49:56.906833  444547 fix.go:112] recreateIfNeeded on functional-124593: state=Running err=<nil>
	W0819 18:49:56.906856  444547 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:49:56.908782  444547 out.go:177] * Updating the running kvm2 "functional-124593" VM ...
	I0819 18:49:56.910443  444547 machine.go:93] provisionDockerMachine start ...
	I0819 18:49:56.910478  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.910823  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:56.913259  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:56.913615  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:56.913638  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:56.913753  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:56.914043  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:56.914207  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:56.914341  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:56.914485  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:56.914684  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:56.914697  444547 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:49:57.017550  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-124593
	
	I0819 18:49:57.017585  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.017923  444547 buildroot.go:166] provisioning hostname "functional-124593"
	I0819 18:49:57.017956  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.018164  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.021185  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.021551  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.021598  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.021780  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.022011  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.022177  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.022309  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.022452  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.022654  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.022668  444547 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-124593 && echo "functional-124593" | sudo tee /etc/hostname
	I0819 18:49:57.141478  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-124593
	
	I0819 18:49:57.141514  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.144157  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.144414  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.144449  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.144722  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.144969  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.145192  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.145388  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.145570  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.145756  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.145776  444547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-124593' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-124593/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-124593' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:49:57.249989  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:49:57.250034  444547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 18:49:57.250086  444547 buildroot.go:174] setting up certificates
	I0819 18:49:57.250099  444547 provision.go:84] configureAuth start
	I0819 18:49:57.250118  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.250442  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:49:57.253181  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.253490  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.253519  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.253712  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.256213  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.256541  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.256586  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.256752  444547 provision.go:143] copyHostCerts
	I0819 18:49:57.256784  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 18:49:57.256824  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 18:49:57.256848  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 18:49:57.256918  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 18:49:57.257021  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 18:49:57.257043  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 18:49:57.257048  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 18:49:57.257071  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 18:49:57.257122  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 18:49:57.257160  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 18:49:57.257176  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 18:49:57.257198  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 18:49:57.257249  444547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.functional-124593 san=[127.0.0.1 192.168.39.22 functional-124593 localhost minikube]
	I0819 18:49:57.505075  444547 provision.go:177] copyRemoteCerts
	I0819 18:49:57.505163  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:49:57.505194  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.508248  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.508654  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.508690  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.508942  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.509160  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.509381  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.509556  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:49:57.591978  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:49:57.592075  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 18:49:57.620626  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:49:57.620699  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 18:49:57.646085  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:49:57.646168  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:49:57.671918  444547 provision.go:87] duration metric: took 421.80001ms to configureAuth
	I0819 18:49:57.671954  444547 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:49:57.672176  444547 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:49:57.672267  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.675054  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.675420  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.675456  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.675667  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.675902  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.676057  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.676211  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.676410  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.676596  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.676611  444547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:50:03.241286  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:50:03.241321  444547 machine.go:96] duration metric: took 6.330855619s to provisionDockerMachine
	I0819 18:50:03.241334  444547 start.go:293] postStartSetup for "functional-124593" (driver="kvm2")
	I0819 18:50:03.241346  444547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:50:03.241368  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.241892  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:50:03.241919  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.244822  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.245262  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.245291  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.245469  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.245716  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.245889  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.246048  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.327892  444547 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:50:03.332233  444547 command_runner.go:130] > NAME=Buildroot
	I0819 18:50:03.332262  444547 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 18:50:03.332268  444547 command_runner.go:130] > ID=buildroot
	I0819 18:50:03.332276  444547 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 18:50:03.332284  444547 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 18:50:03.332381  444547 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:50:03.332400  444547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 18:50:03.332476  444547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 18:50:03.332579  444547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 18:50:03.332593  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 18:50:03.332685  444547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts -> hosts in /etc/test/nested/copy/438159
	I0819 18:50:03.332692  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts -> /etc/test/nested/copy/438159/hosts
	I0819 18:50:03.332732  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/438159
	I0819 18:50:03.343618  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 18:50:03.367775  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts --> /etc/test/nested/copy/438159/hosts (40 bytes)
	I0819 18:50:03.392035  444547 start.go:296] duration metric: took 150.684705ms for postStartSetup
	I0819 18:50:03.392093  444547 fix.go:56] duration metric: took 6.504748451s for fixHost
	I0819 18:50:03.392120  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.394902  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.395203  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.395231  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.395450  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.395682  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.395876  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.396030  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.396215  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:50:03.396420  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:50:03.396434  444547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:50:03.498031  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724093403.488650243
	
	I0819 18:50:03.498062  444547 fix.go:216] guest clock: 1724093403.488650243
	I0819 18:50:03.498069  444547 fix.go:229] Guest: 2024-08-19 18:50:03.488650243 +0000 UTC Remote: 2024-08-19 18:50:03.392098301 +0000 UTC m=+6.637869514 (delta=96.551942ms)
	I0819 18:50:03.498115  444547 fix.go:200] guest clock delta is within tolerance: 96.551942ms
	I0819 18:50:03.498121  444547 start.go:83] releasing machines lock for "functional-124593", held for 6.610795712s
	I0819 18:50:03.498146  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.498456  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:50:03.501197  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.501685  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.501717  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.501963  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502567  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502825  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502931  444547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:50:03.502977  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.503104  444547 ssh_runner.go:195] Run: cat /version.json
	I0819 18:50:03.503130  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.505641  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.505904  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.505942  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.505982  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.506089  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.506248  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.506286  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.506326  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.506510  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.506529  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.506705  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.506709  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.506856  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.507023  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.596444  444547 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 18:50:03.596676  444547 ssh_runner.go:195] Run: systemctl --version
	I0819 18:50:03.642156  444547 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 18:50:03.642205  444547 command_runner.go:130] > systemd 252 (252)
	I0819 18:50:03.642223  444547 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 18:50:03.642284  444547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:50:04.032467  444547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 18:50:04.057730  444547 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 18:50:04.057919  444547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:50:04.058009  444547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:50:04.094792  444547 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 18:50:04.094824  444547 start.go:495] detecting cgroup driver to use...
	I0819 18:50:04.094892  444547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:50:04.216404  444547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:50:04.250117  444547 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:50:04.250182  444547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:50:04.298450  444547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:50:04.329276  444547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:50:04.576464  444547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:50:04.796403  444547 docker.go:233] disabling docker service ...
	I0819 18:50:04.796509  444547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:50:04.824051  444547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:50:04.841929  444547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:50:05.032450  444547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:50:05.230662  444547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:50:05.261270  444547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:50:05.307751  444547 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0819 18:50:05.308002  444547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:50:05.308071  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.325985  444547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:50:05.326072  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.340857  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.355923  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.368797  444547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:50:05.384107  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.396132  444547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.407497  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.421137  444547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:50:05.431493  444547 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 18:50:05.431832  444547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:50:05.444023  444547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:50:05.610160  444547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:51:35.953940  444547 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.343723561s)
	I0819 18:51:35.953984  444547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:51:35.954042  444547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:51:35.958905  444547 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 18:51:35.958943  444547 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 18:51:35.958954  444547 command_runner.go:130] > Device: 0,22	Inode: 1653        Links: 1
	I0819 18:51:35.958965  444547 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 18:51:35.958973  444547 command_runner.go:130] > Access: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958982  444547 command_runner.go:130] > Modify: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958993  444547 command_runner.go:130] > Change: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958999  444547 command_runner.go:130] >  Birth: -
	I0819 18:51:35.959026  444547 start.go:563] Will wait 60s for crictl version
	I0819 18:51:35.959080  444547 ssh_runner.go:195] Run: which crictl
	I0819 18:51:35.962908  444547 command_runner.go:130] > /usr/bin/crictl
	I0819 18:51:35.963010  444547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:51:35.995379  444547 command_runner.go:130] > Version:  0.1.0
	I0819 18:51:35.995417  444547 command_runner.go:130] > RuntimeName:  cri-o
	I0819 18:51:35.995425  444547 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 18:51:35.995433  444547 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 18:51:35.996527  444547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:51:35.996626  444547 ssh_runner.go:195] Run: crio --version
	I0819 18:51:36.025037  444547 command_runner.go:130] > crio version 1.29.1
	I0819 18:51:36.025067  444547 command_runner.go:130] > Version:        1.29.1
	I0819 18:51:36.025076  444547 command_runner.go:130] > GitCommit:      unknown
	I0819 18:51:36.025082  444547 command_runner.go:130] > GitCommitDate:  unknown
	I0819 18:51:36.025088  444547 command_runner.go:130] > GitTreeState:   clean
	I0819 18:51:36.025097  444547 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 18:51:36.025103  444547 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 18:51:36.025108  444547 command_runner.go:130] > Compiler:       gc
	I0819 18:51:36.025115  444547 command_runner.go:130] > Platform:       linux/amd64
	I0819 18:51:36.025122  444547 command_runner.go:130] > Linkmode:       dynamic
	I0819 18:51:36.025137  444547 command_runner.go:130] > BuildTags:      
	I0819 18:51:36.025142  444547 command_runner.go:130] >   containers_image_ostree_stub
	I0819 18:51:36.025147  444547 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 18:51:36.025151  444547 command_runner.go:130] >   btrfs_noversion
	I0819 18:51:36.025156  444547 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 18:51:36.025161  444547 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 18:51:36.025169  444547 command_runner.go:130] >   seccomp
	I0819 18:51:36.025175  444547 command_runner.go:130] > LDFlags:          unknown
	I0819 18:51:36.025182  444547 command_runner.go:130] > SeccompEnabled:   true
	I0819 18:51:36.025187  444547 command_runner.go:130] > AppArmorEnabled:  false
	I0819 18:51:36.025256  444547 ssh_runner.go:195] Run: crio --version
	I0819 18:51:36.052216  444547 command_runner.go:130] > crio version 1.29.1
	I0819 18:51:36.052240  444547 command_runner.go:130] > Version:        1.29.1
	I0819 18:51:36.052247  444547 command_runner.go:130] > GitCommit:      unknown
	I0819 18:51:36.052252  444547 command_runner.go:130] > GitCommitDate:  unknown
	I0819 18:51:36.052256  444547 command_runner.go:130] > GitTreeState:   clean
	I0819 18:51:36.052261  444547 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 18:51:36.052266  444547 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 18:51:36.052270  444547 command_runner.go:130] > Compiler:       gc
	I0819 18:51:36.052282  444547 command_runner.go:130] > Platform:       linux/amd64
	I0819 18:51:36.052288  444547 command_runner.go:130] > Linkmode:       dynamic
	I0819 18:51:36.052294  444547 command_runner.go:130] > BuildTags:      
	I0819 18:51:36.052301  444547 command_runner.go:130] >   containers_image_ostree_stub
	I0819 18:51:36.052307  444547 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 18:51:36.052317  444547 command_runner.go:130] >   btrfs_noversion
	I0819 18:51:36.052324  444547 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 18:51:36.052333  444547 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 18:51:36.052338  444547 command_runner.go:130] >   seccomp
	I0819 18:51:36.052345  444547 command_runner.go:130] > LDFlags:          unknown
	I0819 18:51:36.052350  444547 command_runner.go:130] > SeccompEnabled:   true
	I0819 18:51:36.052356  444547 command_runner.go:130] > AppArmorEnabled:  false
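The runtime detection above reduces to stat-ing the CRI socket and then shelling out to crictl/crio for version strings. A minimal Go sketch of the same probe, assuming crictl is on the node's PATH and can reach the default CRI-O socket (an illustrative re-implementation, not minikube's start.go code):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Ask the CRI runtime for its version; `sudo` mirrors how the log
	// above invokes crictl. Drop it if the socket is readable as-is.
	out, err := exec.Command("sudo", "crictl", "version").Output()
	if err != nil {
		log.Fatalf("crictl version failed: %v", err)
	}
	// Parse the "Key:  Value" lines printed by crictl into a map.
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	fmt.Printf("runtime %s %s (API %s)\n",
		fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
}

Run on the node (or over ssh), this should print something like "runtime cri-o 1.29.1 (API v1)", matching the summary logged above.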
	I0819 18:51:36.055292  444547 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:51:36.056598  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:51:36.059532  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:51:36.059864  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:51:36.059895  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:51:36.060137  444547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:51:36.064416  444547 command_runner.go:130] > 192.168.39.1	host.minikube.internal
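The grep above only confirms that host.minikube.internal already points at the gateway IP inside the guest. A tiny sketch of the same /etc/hosts check (IP and hostname taken from this log; illustrative only, not minikube's code):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fs := strings.Fields(sc.Text())
		// Expected entry, per the log: "192.168.39.1  host.minikube.internal"
		if len(fs) >= 2 && fs[0] == "192.168.39.1" && fs[1] == "host.minikube.internal" {
			fmt.Println("host.minikube.internal entry present")
			return
		}
	}
	fmt.Println("host.minikube.internal entry missing")
}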
	I0819 18:51:36.064570  444547 kubeadm.go:883] updating cluster {Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:51:36.064698  444547 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:51:36.064782  444547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:51:36.110239  444547 command_runner.go:130] > {
	I0819 18:51:36.110264  444547 command_runner.go:130] >   "images": [
	I0819 18:51:36.110268  444547 command_runner.go:130] >     {
	I0819 18:51:36.110277  444547 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 18:51:36.110281  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110287  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 18:51:36.110290  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110294  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110303  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 18:51:36.110310  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 18:51:36.110314  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110319  444547 command_runner.go:130] >       "size": "87165492",
	I0819 18:51:36.110324  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110330  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110343  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110350  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110359  444547 command_runner.go:130] >     },
	I0819 18:51:36.110364  444547 command_runner.go:130] >     {
	I0819 18:51:36.110373  444547 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 18:51:36.110391  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110399  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 18:51:36.110402  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110406  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110414  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 18:51:36.110425  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 18:51:36.110432  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110443  444547 command_runner.go:130] >       "size": "31470524",
	I0819 18:51:36.110453  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110461  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110468  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110477  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110483  444547 command_runner.go:130] >     },
	I0819 18:51:36.110502  444547 command_runner.go:130] >     {
	I0819 18:51:36.110513  444547 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 18:51:36.110522  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110533  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 18:51:36.110539  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110549  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110563  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 18:51:36.110577  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 18:51:36.110586  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110594  444547 command_runner.go:130] >       "size": "61245718",
	I0819 18:51:36.110601  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110611  444547 command_runner.go:130] >       "username": "nonroot",
	I0819 18:51:36.110621  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110631  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110637  444547 command_runner.go:130] >     },
	I0819 18:51:36.110645  444547 command_runner.go:130] >     {
	I0819 18:51:36.110658  444547 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 18:51:36.110668  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110677  444547 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 18:51:36.110684  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110701  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110715  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 18:51:36.110733  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 18:51:36.110742  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110753  444547 command_runner.go:130] >       "size": "149009664",
	I0819 18:51:36.110760  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.110764  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.110770  444547 command_runner.go:130] >       },
	I0819 18:51:36.110777  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110787  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110797  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110805  444547 command_runner.go:130] >     },
	I0819 18:51:36.110814  444547 command_runner.go:130] >     {
	I0819 18:51:36.110823  444547 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 18:51:36.110832  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110842  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 18:51:36.110849  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110853  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110868  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 18:51:36.110884  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 18:51:36.110893  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110901  444547 command_runner.go:130] >       "size": "95233506",
	I0819 18:51:36.110909  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.110918  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.110927  444547 command_runner.go:130] >       },
	I0819 18:51:36.110934  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110939  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110947  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110956  444547 command_runner.go:130] >     },
	I0819 18:51:36.110965  444547 command_runner.go:130] >     {
	I0819 18:51:36.110978  444547 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 18:51:36.110988  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110999  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 18:51:36.111007  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111013  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111025  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 18:51:36.111040  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 18:51:36.111049  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111060  444547 command_runner.go:130] >       "size": "89437512",
	I0819 18:51:36.111070  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111080  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.111089  444547 command_runner.go:130] >       },
	I0819 18:51:36.111096  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111104  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111114  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111122  444547 command_runner.go:130] >     },
	I0819 18:51:36.111128  444547 command_runner.go:130] >     {
	I0819 18:51:36.111140  444547 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 18:51:36.111148  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111154  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 18:51:36.111163  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111170  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111185  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 18:51:36.111199  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 18:51:36.111206  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111213  444547 command_runner.go:130] >       "size": "92728217",
	I0819 18:51:36.111223  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.111230  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111239  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111246  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111254  444547 command_runner.go:130] >     },
	I0819 18:51:36.111267  444547 command_runner.go:130] >     {
	I0819 18:51:36.111281  444547 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 18:51:36.111290  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111299  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 18:51:36.111307  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111313  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111333  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 18:51:36.111345  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 18:51:36.111351  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111355  444547 command_runner.go:130] >       "size": "68420936",
	I0819 18:51:36.111361  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111365  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.111370  444547 command_runner.go:130] >       },
	I0819 18:51:36.111374  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111381  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111385  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111389  444547 command_runner.go:130] >     },
	I0819 18:51:36.111393  444547 command_runner.go:130] >     {
	I0819 18:51:36.111399  444547 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 18:51:36.111405  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111410  444547 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 18:51:36.111415  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111420  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111429  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 18:51:36.111438  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 18:51:36.111442  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111448  444547 command_runner.go:130] >       "size": "742080",
	I0819 18:51:36.111452  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111456  444547 command_runner.go:130] >         "value": "65535"
	I0819 18:51:36.111460  444547 command_runner.go:130] >       },
	I0819 18:51:36.111464  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111480  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111486  444547 command_runner.go:130] >       "pinned": true
	I0819 18:51:36.111494  444547 command_runner.go:130] >     }
	I0819 18:51:36.111502  444547 command_runner.go:130] >   ]
	I0819 18:51:36.111507  444547 command_runner.go:130] > }
	I0819 18:51:36.111701  444547 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:51:36.111714  444547 crio.go:433] Images already preloaded, skipping extraction
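The "all images are preloaded" decision is driven by the crictl images --output json dump shown above: the repoTags reported by the runtime are compared against the image set expected for the requested Kubernetes version. A rough Go sketch of that comparison, assuming the JSON shape visible in the log and a hand-picked expected list (the real expected set lives in minikube's bootstrapper code, so treat this as illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// criImages mirrors only the fields of `crictl images --output json` that
// this check needs; the full output also has repoDigests, size, uid, etc.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatalf("crictl images failed: %v", err)
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatalf("unexpected crictl output: %v", err)
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative subset of what a v1.31.0/CRI-O preload should contain,
	// taken from the repoTags in the log above.
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
	}
	for _, w := range want {
		fmt.Printf("%-45s preloaded=%v\n", w, have[w])
	}
}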
	I0819 18:51:36.111767  444547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:51:36.143806  444547 command_runner.go:130] > {
	I0819 18:51:36.143831  444547 command_runner.go:130] >   "images": [
	I0819 18:51:36.143835  444547 command_runner.go:130] >     {
	I0819 18:51:36.143843  444547 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 18:51:36.143848  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.143854  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 18:51:36.143857  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143861  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.143870  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 18:51:36.143877  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 18:51:36.143883  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143887  444547 command_runner.go:130] >       "size": "87165492",
	I0819 18:51:36.143891  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.143898  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.143904  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.143909  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.143912  444547 command_runner.go:130] >     },
	I0819 18:51:36.143916  444547 command_runner.go:130] >     {
	I0819 18:51:36.143922  444547 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 18:51:36.143929  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.143934  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 18:51:36.143939  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143943  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.143953  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 18:51:36.143960  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 18:51:36.143967  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143978  444547 command_runner.go:130] >       "size": "31470524",
	I0819 18:51:36.143984  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.143992  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144001  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144007  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144016  444547 command_runner.go:130] >     },
	I0819 18:51:36.144021  444547 command_runner.go:130] >     {
	I0819 18:51:36.144036  444547 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 18:51:36.144043  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144048  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 18:51:36.144054  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144058  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144067  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 18:51:36.144085  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 18:51:36.144093  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144100  444547 command_runner.go:130] >       "size": "61245718",
	I0819 18:51:36.144109  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.144119  444547 command_runner.go:130] >       "username": "nonroot",
	I0819 18:51:36.144126  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144134  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144138  444547 command_runner.go:130] >     },
	I0819 18:51:36.144142  444547 command_runner.go:130] >     {
	I0819 18:51:36.144148  444547 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 18:51:36.144154  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144159  444547 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 18:51:36.144162  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144165  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144172  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 18:51:36.144188  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 18:51:36.144197  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144204  444547 command_runner.go:130] >       "size": "149009664",
	I0819 18:51:36.144213  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144220  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144227  444547 command_runner.go:130] >       },
	I0819 18:51:36.144231  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144237  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144243  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144249  444547 command_runner.go:130] >     },
	I0819 18:51:36.144252  444547 command_runner.go:130] >     {
	I0819 18:51:36.144259  444547 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 18:51:36.144267  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144276  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 18:51:36.144285  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144291  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144305  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 18:51:36.144320  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 18:51:36.144327  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144333  444547 command_runner.go:130] >       "size": "95233506",
	I0819 18:51:36.144337  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144341  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144347  444547 command_runner.go:130] >       },
	I0819 18:51:36.144352  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144358  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144365  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144374  444547 command_runner.go:130] >     },
	I0819 18:51:36.144380  444547 command_runner.go:130] >     {
	I0819 18:51:36.144389  444547 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 18:51:36.144399  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144408  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 18:51:36.144419  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144427  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144435  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 18:51:36.144449  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 18:51:36.144471  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144501  444547 command_runner.go:130] >       "size": "89437512",
	I0819 18:51:36.144507  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144516  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144521  444547 command_runner.go:130] >       },
	I0819 18:51:36.144526  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144532  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144541  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144547  444547 command_runner.go:130] >     },
	I0819 18:51:36.144558  444547 command_runner.go:130] >     {
	I0819 18:51:36.144568  444547 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 18:51:36.144577  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144585  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 18:51:36.144593  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144600  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144611  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 18:51:36.144623  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 18:51:36.144632  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144640  444547 command_runner.go:130] >       "size": "92728217",
	I0819 18:51:36.144649  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.144656  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144663  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144669  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144677  444547 command_runner.go:130] >     },
	I0819 18:51:36.144682  444547 command_runner.go:130] >     {
	I0819 18:51:36.144694  444547 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 18:51:36.144704  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144716  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 18:51:36.144725  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144734  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144755  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 18:51:36.144768  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 18:51:36.144775  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144780  444547 command_runner.go:130] >       "size": "68420936",
	I0819 18:51:36.144789  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144798  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144807  444547 command_runner.go:130] >       },
	I0819 18:51:36.144816  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144826  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144835  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144843  444547 command_runner.go:130] >     },
	I0819 18:51:36.144849  444547 command_runner.go:130] >     {
	I0819 18:51:36.144864  444547 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 18:51:36.144873  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144882  444547 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 18:51:36.144892  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144901  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144912  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 18:51:36.144926  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 18:51:36.144934  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144940  444547 command_runner.go:130] >       "size": "742080",
	I0819 18:51:36.144944  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144950  444547 command_runner.go:130] >         "value": "65535"
	I0819 18:51:36.144958  444547 command_runner.go:130] >       },
	I0819 18:51:36.144968  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144979  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144988  444547 command_runner.go:130] >       "pinned": true
	I0819 18:51:36.144995  444547 command_runner.go:130] >     }
	I0819 18:51:36.145001  444547 command_runner.go:130] >   ]
	I0819 18:51:36.145008  444547 command_runner.go:130] > }
	I0819 18:51:36.145182  444547 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:51:36.145198  444547 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:51:36.145207  444547 kubeadm.go:934] updating node { 192.168.39.22 8441 v1.31.0 crio true true} ...
	I0819 18:51:36.145347  444547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-124593 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
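The kubelet drop-in above is a rendered template: the node name, node IP and versioned kubelet path are substituted per node from the config struct that follows it. A minimal sketch of rendering such a unit with text/template, using the values from this log (the template text is a simplified stand-in, not minikube's actual kubeadm template):

package main

import (
	"log"
	"os"
	"text/template"
)

// kubeletUnit is a simplified stand-in for the systemd drop-in seen in the
// log above; field names are illustrative.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log above.
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.0", "functional-124593", "192.168.39.22"}
	if err := t.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}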
	I0819 18:51:36.145440  444547 ssh_runner.go:195] Run: crio config
	I0819 18:51:36.185689  444547 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 18:51:36.185722  444547 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 18:51:36.185733  444547 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 18:51:36.185738  444547 command_runner.go:130] > #
	I0819 18:51:36.185763  444547 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 18:51:36.185772  444547 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 18:51:36.185782  444547 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 18:51:36.185794  444547 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 18:51:36.185800  444547 command_runner.go:130] > # reload'.
	I0819 18:51:36.185810  444547 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 18:51:36.185824  444547 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 18:51:36.185834  444547 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 18:51:36.185851  444547 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 18:51:36.185857  444547 command_runner.go:130] > [crio]
	I0819 18:51:36.185867  444547 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 18:51:36.185878  444547 command_runner.go:130] > # containers images, in this directory.
	I0819 18:51:36.185886  444547 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 18:51:36.185906  444547 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 18:51:36.185916  444547 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 18:51:36.185927  444547 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 18:51:36.185937  444547 command_runner.go:130] > # imagestore = ""
	I0819 18:51:36.185947  444547 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 18:51:36.185960  444547 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 18:51:36.185968  444547 command_runner.go:130] > storage_driver = "overlay"
	I0819 18:51:36.185979  444547 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 18:51:36.185990  444547 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 18:51:36.186001  444547 command_runner.go:130] > storage_option = [
	I0819 18:51:36.186010  444547 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 18:51:36.186018  444547 command_runner.go:130] > ]
	I0819 18:51:36.186029  444547 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 18:51:36.186041  444547 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 18:51:36.186052  444547 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 18:51:36.186068  444547 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 18:51:36.186082  444547 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 18:51:36.186092  444547 command_runner.go:130] > # always happen on a node reboot
	I0819 18:51:36.186103  444547 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 18:51:36.186124  444547 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 18:51:36.186136  444547 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 18:51:36.186147  444547 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 18:51:36.186155  444547 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 18:51:36.186168  444547 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 18:51:36.186183  444547 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 18:51:36.186193  444547 command_runner.go:130] > # internal_wipe = true
	I0819 18:51:36.186206  444547 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 18:51:36.186217  444547 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 18:51:36.186227  444547 command_runner.go:130] > # internal_repair = false
	I0819 18:51:36.186235  444547 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 18:51:36.186247  444547 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 18:51:36.186256  444547 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 18:51:36.186268  444547 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0819 18:51:36.186303  444547 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 18:51:36.186317  444547 command_runner.go:130] > [crio.api]
	I0819 18:51:36.186326  444547 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 18:51:36.186333  444547 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 18:51:36.186342  444547 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 18:51:36.186353  444547 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 18:51:36.186363  444547 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 18:51:36.186374  444547 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 18:51:36.186386  444547 command_runner.go:130] > # stream_port = "0"
	I0819 18:51:36.186395  444547 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 18:51:36.186402  444547 command_runner.go:130] > # stream_enable_tls = false
	I0819 18:51:36.186409  444547 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 18:51:36.186418  444547 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 18:51:36.186429  444547 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 18:51:36.186441  444547 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 18:51:36.186450  444547 command_runner.go:130] > # minutes.
	I0819 18:51:36.186457  444547 command_runner.go:130] > # stream_tls_cert = ""
	I0819 18:51:36.186468  444547 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 18:51:36.186486  444547 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 18:51:36.186498  444547 command_runner.go:130] > # stream_tls_key = ""
	I0819 18:51:36.186511  444547 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 18:51:36.186523  444547 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 18:51:36.186547  444547 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 18:51:36.186556  444547 command_runner.go:130] > # stream_tls_ca = ""
	I0819 18:51:36.186567  444547 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 18:51:36.186578  444547 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 18:51:36.186589  444547 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 18:51:36.186600  444547 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0819 18:51:36.186610  444547 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 18:51:36.186622  444547 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 18:51:36.186629  444547 command_runner.go:130] > [crio.runtime]
	I0819 18:51:36.186639  444547 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 18:51:36.186650  444547 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 18:51:36.186659  444547 command_runner.go:130] > # "nofile=1024:2048"
	I0819 18:51:36.186670  444547 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 18:51:36.186674  444547 command_runner.go:130] > # default_ulimits = [
	I0819 18:51:36.186678  444547 command_runner.go:130] > # ]
	I0819 18:51:36.186687  444547 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 18:51:36.186701  444547 command_runner.go:130] > # no_pivot = false
	I0819 18:51:36.186714  444547 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 18:51:36.186727  444547 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 18:51:36.186738  444547 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 18:51:36.186747  444547 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 18:51:36.186758  444547 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 18:51:36.186773  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 18:51:36.186783  444547 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 18:51:36.186791  444547 command_runner.go:130] > # Cgroup setting for conmon
	I0819 18:51:36.186805  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 18:51:36.186814  444547 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 18:51:36.186824  444547 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 18:51:36.186834  444547 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 18:51:36.186845  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 18:51:36.186855  444547 command_runner.go:130] > conmon_env = [
	I0819 18:51:36.186864  444547 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 18:51:36.186872  444547 command_runner.go:130] > ]
	I0819 18:51:36.186881  444547 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 18:51:36.186891  444547 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 18:51:36.186902  444547 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 18:51:36.186911  444547 command_runner.go:130] > # default_env = [
	I0819 18:51:36.186916  444547 command_runner.go:130] > # ]
	I0819 18:51:36.186957  444547 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 18:51:36.186977  444547 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0819 18:51:36.186983  444547 command_runner.go:130] > # selinux = false
	I0819 18:51:36.186992  444547 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 18:51:36.187004  444547 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 18:51:36.187019  444547 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 18:51:36.187029  444547 command_runner.go:130] > # seccomp_profile = ""
	I0819 18:51:36.187038  444547 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 18:51:36.187049  444547 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 18:51:36.187059  444547 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 18:51:36.187069  444547 command_runner.go:130] > # which might increase security.
	I0819 18:51:36.187074  444547 command_runner.go:130] > # This option is currently deprecated,
	I0819 18:51:36.187084  444547 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 18:51:36.187095  444547 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 18:51:36.187107  444547 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 18:51:36.187127  444547 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 18:51:36.187139  444547 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 18:51:36.187152  444547 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0819 18:51:36.187160  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.187167  444547 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 18:51:36.187178  444547 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 18:51:36.187188  444547 command_runner.go:130] > # the cgroup blockio controller.
	I0819 18:51:36.187200  444547 command_runner.go:130] > # blockio_config_file = ""
	I0819 18:51:36.187214  444547 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 18:51:36.187224  444547 command_runner.go:130] > # blockio parameters.
	I0819 18:51:36.187231  444547 command_runner.go:130] > # blockio_reload = false
	I0819 18:51:36.187241  444547 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 18:51:36.187250  444547 command_runner.go:130] > # irqbalance daemon.
	I0819 18:51:36.187259  444547 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 18:51:36.187271  444547 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0819 18:51:36.187285  444547 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 18:51:36.187297  444547 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 18:51:36.187309  444547 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 18:51:36.187322  444547 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 18:51:36.187332  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.187344  444547 command_runner.go:130] > # rdt_config_file = ""
	I0819 18:51:36.187353  444547 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 18:51:36.187363  444547 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 18:51:36.187390  444547 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 18:51:36.187400  444547 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 18:51:36.187410  444547 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 18:51:36.187425  444547 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 18:51:36.187435  444547 command_runner.go:130] > # will be added.
	I0819 18:51:36.187442  444547 command_runner.go:130] > # default_capabilities = [
	I0819 18:51:36.187451  444547 command_runner.go:130] > # 	"CHOWN",
	I0819 18:51:36.187458  444547 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 18:51:36.187466  444547 command_runner.go:130] > # 	"FSETID",
	I0819 18:51:36.187476  444547 command_runner.go:130] > # 	"FOWNER",
	I0819 18:51:36.187484  444547 command_runner.go:130] > # 	"SETGID",
	I0819 18:51:36.187490  444547 command_runner.go:130] > # 	"SETUID",
	I0819 18:51:36.187499  444547 command_runner.go:130] > # 	"SETPCAP",
	I0819 18:51:36.187506  444547 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 18:51:36.187516  444547 command_runner.go:130] > # 	"KILL",
	I0819 18:51:36.187521  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187536  444547 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 18:51:36.187549  444547 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 18:51:36.187564  444547 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 18:51:36.187577  444547 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 18:51:36.187588  444547 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 18:51:36.187595  444547 command_runner.go:130] > default_sysctls = [
	I0819 18:51:36.187599  444547 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 18:51:36.187602  444547 command_runner.go:130] > ]
	I0819 18:51:36.187607  444547 command_runner.go:130] > # List of devices on the host that a
	I0819 18:51:36.187613  444547 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 18:51:36.187617  444547 command_runner.go:130] > # allowed_devices = [
	I0819 18:51:36.187621  444547 command_runner.go:130] > # 	"/dev/fuse",
	I0819 18:51:36.187626  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187637  444547 command_runner.go:130] > # List of additional devices. specified as
	I0819 18:51:36.187650  444547 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 18:51:36.187663  444547 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 18:51:36.187675  444547 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 18:51:36.187685  444547 command_runner.go:130] > # additional_devices = [
	I0819 18:51:36.187690  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187699  444547 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 18:51:36.187703  444547 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 18:51:36.187707  444547 command_runner.go:130] > # 	"/etc/cdi",
	I0819 18:51:36.187711  444547 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 18:51:36.187715  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187721  444547 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 18:51:36.187729  444547 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 18:51:36.187735  444547 command_runner.go:130] > # Defaults to false.
	I0819 18:51:36.187739  444547 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 18:51:36.187746  444547 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 18:51:36.187753  444547 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 18:51:36.187756  444547 command_runner.go:130] > # hooks_dir = [
	I0819 18:51:36.187761  444547 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 18:51:36.187766  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187775  444547 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 18:51:36.187788  444547 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 18:51:36.187800  444547 command_runner.go:130] > # its default mounts from the following two files:
	I0819 18:51:36.187808  444547 command_runner.go:130] > #
	I0819 18:51:36.187819  444547 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 18:51:36.187831  444547 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 18:51:36.187841  444547 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 18:51:36.187846  444547 command_runner.go:130] > #
	I0819 18:51:36.187856  444547 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 18:51:36.187870  444547 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 18:51:36.187887  444547 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 18:51:36.187899  444547 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 18:51:36.187907  444547 command_runner.go:130] > #
	I0819 18:51:36.187915  444547 command_runner.go:130] > # default_mounts_file = ""
	I0819 18:51:36.187927  444547 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 18:51:36.187940  444547 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 18:51:36.187948  444547 command_runner.go:130] > pids_limit = 1024
	I0819 18:51:36.187961  444547 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0819 18:51:36.187976  444547 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 18:51:36.187989  444547 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 18:51:36.188004  444547 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 18:51:36.188020  444547 command_runner.go:130] > # log_size_max = -1
	I0819 18:51:36.188034  444547 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 18:51:36.188043  444547 command_runner.go:130] > # log_to_journald = false
	I0819 18:51:36.188053  444547 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 18:51:36.188064  444547 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 18:51:36.188076  444547 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 18:51:36.188084  444547 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 18:51:36.188095  444547 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 18:51:36.188103  444547 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 18:51:36.188113  444547 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 18:51:36.188123  444547 command_runner.go:130] > # read_only = false
	I0819 18:51:36.188133  444547 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 18:51:36.188144  444547 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 18:51:36.188151  444547 command_runner.go:130] > # live configuration reload.
	I0819 18:51:36.188161  444547 command_runner.go:130] > # log_level = "info"
	I0819 18:51:36.188171  444547 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 18:51:36.188182  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.188190  444547 command_runner.go:130] > # log_filter = ""
	I0819 18:51:36.188199  444547 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 18:51:36.188216  444547 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 18:51:36.188225  444547 command_runner.go:130] > # separated by comma.
	I0819 18:51:36.188237  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188247  444547 command_runner.go:130] > # uid_mappings = ""
	I0819 18:51:36.188257  444547 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 18:51:36.188269  444547 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 18:51:36.188278  444547 command_runner.go:130] > # separated by comma.
	I0819 18:51:36.188293  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188303  444547 command_runner.go:130] > # gid_mappings = ""
	I0819 18:51:36.188313  444547 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 18:51:36.188325  444547 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 18:51:36.188337  444547 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 18:51:36.188351  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188359  444547 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 18:51:36.188366  444547 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 18:51:36.188375  444547 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 18:51:36.188381  444547 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 18:51:36.188390  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188394  444547 command_runner.go:130] > # minimum_mappable_gid = -1
	I0819 18:51:36.188402  444547 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 18:51:36.188408  444547 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 18:51:36.188415  444547 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 18:51:36.188419  444547 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 18:51:36.188424  444547 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 18:51:36.188430  444547 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 18:51:36.188437  444547 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 18:51:36.188441  444547 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 18:51:36.188445  444547 command_runner.go:130] > drop_infra_ctr = false
	I0819 18:51:36.188451  444547 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 18:51:36.188458  444547 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 18:51:36.188465  444547 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 18:51:36.188471  444547 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 18:51:36.188482  444547 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 18:51:36.188489  444547 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 18:51:36.188495  444547 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 18:51:36.188502  444547 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 18:51:36.188506  444547 command_runner.go:130] > # shared_cpuset = ""
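The Linux CPU list format mentioned above is a comma-separated list of CPUs and ranges (for example "0-1" or "2-3,6"). A minimal sketch of setting both cpusets via a drop-in, with illustrative values and an assumed file name:

	# CRI-O merges drop-ins from /etc/crio/crio.conf.d/; the values below are examples only.
	sudo tee /etc/crio/crio.conf.d/20-cpusets.conf <<-'EOF' >/dev/null
	[crio.runtime]
	infra_ctr_cpuset = "0-1"
	shared_cpuset = "2-3,6"
	EOF
	sudo systemctl restart crio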
	I0819 18:51:36.188514  444547 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 18:51:36.188519  444547 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 18:51:36.188524  444547 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 18:51:36.188531  444547 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 18:51:36.188537  444547 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 18:51:36.188549  444547 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 18:51:36.188561  444547 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 18:51:36.188571  444547 command_runner.go:130] > # enable_criu_support = false
	I0819 18:51:36.188579  444547 command_runner.go:130] > # Enable/disable the generation of the container,
	I0819 18:51:36.188591  444547 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 18:51:36.188598  444547 command_runner.go:130] > # enable_pod_events = false
	I0819 18:51:36.188604  444547 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 18:51:36.188620  444547 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 18:51:36.188626  444547 command_runner.go:130] > # default_runtime = "runc"
	I0819 18:51:36.188631  444547 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 18:51:36.188638  444547 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I0819 18:51:36.188649  444547 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 18:51:36.188656  444547 command_runner.go:130] > # creation as a file is not desired either.
	I0819 18:51:36.188664  444547 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 18:51:36.188671  444547 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 18:51:36.188675  444547 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 18:51:36.188681  444547 command_runner.go:130] > # ]
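Taking the /etc/hostname case from the comment above, a sketch of populating the rejection list (the drop-in name is an assumption):

	# Fail container creation instead of creating a missing /etc/hostname as a directory.
	sudo tee /etc/crio/crio.conf.d/15-absent-mounts.conf <<-'EOF' >/dev/null
	[crio.runtime]
	absent_mount_sources_to_reject = ["/etc/hostname"]
	EOF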
	I0819 18:51:36.188686  444547 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 18:51:36.188694  444547 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 18:51:36.188700  444547 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 18:51:36.188708  444547 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 18:51:36.188711  444547 command_runner.go:130] > #
	I0819 18:51:36.188716  444547 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 18:51:36.188720  444547 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 18:51:36.188744  444547 command_runner.go:130] > # runtime_type = "oci"
	I0819 18:51:36.188752  444547 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 18:51:36.188757  444547 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 18:51:36.188763  444547 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 18:51:36.188768  444547 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 18:51:36.188774  444547 command_runner.go:130] > # monitor_env = []
	I0819 18:51:36.188778  444547 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 18:51:36.188782  444547 command_runner.go:130] > # allowed_annotations = []
	I0819 18:51:36.188790  444547 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 18:51:36.188795  444547 command_runner.go:130] > # Where:
	I0819 18:51:36.188800  444547 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 18:51:36.188806  444547 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 18:51:36.188813  444547 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 18:51:36.188822  444547 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 18:51:36.188828  444547 command_runner.go:130] > #   in $PATH.
	I0819 18:51:36.188834  444547 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 18:51:36.188839  444547 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 18:51:36.188845  444547 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 18:51:36.188851  444547 command_runner.go:130] > #   state.
	I0819 18:51:36.188858  444547 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 18:51:36.188865  444547 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0819 18:51:36.188871  444547 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 18:51:36.188879  444547 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 18:51:36.188885  444547 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 18:51:36.188893  444547 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 18:51:36.188898  444547 command_runner.go:130] > #   The currently recognized values are:
	I0819 18:51:36.188904  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 18:51:36.188911  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 18:51:36.188917  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 18:51:36.188925  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 18:51:36.188934  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 18:51:36.188940  444547 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 18:51:36.188948  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 18:51:36.188954  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 18:51:36.188962  444547 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 18:51:36.188968  444547 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 18:51:36.188972  444547 command_runner.go:130] > #   deprecated option "conmon".
	I0819 18:51:36.188979  444547 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 18:51:36.188985  444547 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 18:51:36.188992  444547 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 18:51:36.188998  444547 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 18:51:36.189006  444547 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0819 18:51:36.189013  444547 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 18:51:36.189019  444547 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 18:51:36.189026  444547 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0819 18:51:36.189031  444547 command_runner.go:130] > #
	I0819 18:51:36.189041  444547 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 18:51:36.189044  444547 command_runner.go:130] > #
	I0819 18:51:36.189051  444547 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 18:51:36.189058  444547 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 18:51:36.189062  444547 command_runner.go:130] > #
	I0819 18:51:36.189070  444547 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 18:51:36.189078  444547 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 18:51:36.189082  444547 command_runner.go:130] > #
	I0819 18:51:36.189089  444547 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 18:51:36.189095  444547 command_runner.go:130] > # feature.
	I0819 18:51:36.189100  444547 command_runner.go:130] > #
	I0819 18:51:36.189106  444547 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0819 18:51:36.189114  444547 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 18:51:36.189120  444547 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 18:51:36.189127  444547 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 18:51:36.189146  444547 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 18:51:36.189154  444547 command_runner.go:130] > #
	I0819 18:51:36.189163  444547 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 18:51:36.189174  444547 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 18:51:36.189178  444547 command_runner.go:130] > #
	I0819 18:51:36.189184  444547 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 18:51:36.189192  444547 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 18:51:36.189195  444547 command_runner.go:130] > #
	I0819 18:51:36.189203  444547 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 18:51:36.189209  444547 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 18:51:36.189214  444547 command_runner.go:130] > # limitation.
	I0819 18:51:36.189220  444547 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 18:51:36.189226  444547 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 18:51:36.189230  444547 command_runner.go:130] > runtime_type = "oci"
	I0819 18:51:36.189234  444547 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 18:51:36.189240  444547 command_runner.go:130] > runtime_config_path = ""
	I0819 18:51:36.189244  444547 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 18:51:36.189248  444547 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 18:51:36.189252  444547 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 18:51:36.189256  444547 command_runner.go:130] > monitor_env = [
	I0819 18:51:36.189261  444547 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 18:51:36.189266  444547 command_runner.go:130] > ]
	I0819 18:51:36.189270  444547 command_runner.go:130] > privileged_without_host_devices = false
	I0819 18:51:36.189278  444547 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 18:51:36.189283  444547 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 18:51:36.189291  444547 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 18:51:36.189302  444547 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0819 18:51:36.189311  444547 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 18:51:36.189317  444547 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 18:51:36.189328  444547 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 18:51:36.189339  444547 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 18:51:36.189346  444547 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 18:51:36.189353  444547 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 18:51:36.189358  444547 command_runner.go:130] > # Example:
	I0819 18:51:36.189363  444547 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 18:51:36.189370  444547 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 18:51:36.189374  444547 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 18:51:36.189382  444547 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 18:51:36.189386  444547 command_runner.go:130] > # cpushares = 0
	I0819 18:51:36.189393  444547 command_runner.go:130] > # cpuset = "0-1"
	I0819 18:51:36.189396  444547 command_runner.go:130] > # Where:
	I0819 18:51:36.189401  444547 command_runner.go:130] > # The workload name is workload-type.
	I0819 18:51:36.189409  444547 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 18:51:36.189415  444547 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 18:51:36.189422  444547 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 18:51:36.189430  444547 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 18:51:36.189437  444547 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0819 18:51:36.189442  444547 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 18:51:36.189449  444547 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 18:51:36.189455  444547 command_runner.go:130] > # Default value is set to true
	I0819 18:51:36.189459  444547 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 18:51:36.189469  444547 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 18:51:36.189478  444547 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 18:51:36.189484  444547 command_runner.go:130] > # Default value is set to 'false'
	I0819 18:51:36.189489  444547 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 18:51:36.189497  444547 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 18:51:36.189500  444547 command_runner.go:130] > #
	I0819 18:51:36.189505  444547 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 18:51:36.189513  444547 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 18:51:36.189519  444547 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 18:51:36.189528  444547 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 18:51:36.189536  444547 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0819 18:51:36.189542  444547 command_runner.go:130] > [crio.image]
	I0819 18:51:36.189548  444547 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 18:51:36.189554  444547 command_runner.go:130] > # default_transport = "docker://"
	I0819 18:51:36.189560  444547 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 18:51:36.189569  444547 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 18:51:36.189574  444547 command_runner.go:130] > # global_auth_file = ""
	I0819 18:51:36.189578  444547 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 18:51:36.189583  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.189590  444547 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 18:51:36.189596  444547 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 18:51:36.189604  444547 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 18:51:36.189609  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.189615  444547 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 18:51:36.189620  444547 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 18:51:36.189626  444547 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0819 18:51:36.189632  444547 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0819 18:51:36.189639  444547 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 18:51:36.189643  444547 command_runner.go:130] > # pause_command = "/pause"
	I0819 18:51:36.189649  444547 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 18:51:36.189655  444547 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 18:51:36.189660  444547 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 18:51:36.189670  444547 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 18:51:36.189678  444547 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 18:51:36.189684  444547 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 18:51:36.189690  444547 command_runner.go:130] > # pinned_images = [
	I0819 18:51:36.189693  444547 command_runner.go:130] > # ]
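A short sketch of pinning the pause image used by this configuration so it is never garbage-collected (the drop-in name is illustrative):

	sudo tee /etc/crio/crio.conf.d/40-pinned-images.conf <<-'EOF' >/dev/null
	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10",
	]
	EOF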
	I0819 18:51:36.189700  444547 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 18:51:36.189707  444547 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 18:51:36.189713  444547 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 18:51:36.189721  444547 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 18:51:36.189726  444547 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 18:51:36.189732  444547 command_runner.go:130] > # signature_policy = ""
	I0819 18:51:36.189737  444547 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 18:51:36.189744  444547 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 18:51:36.189754  444547 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 18:51:36.189762  444547 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0819 18:51:36.189770  444547 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 18:51:36.189775  444547 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 18:51:36.189781  444547 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 18:51:36.189786  444547 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 18:51:36.189791  444547 command_runner.go:130] > # changing them here.
	I0819 18:51:36.189795  444547 command_runner.go:130] > # insecure_registries = [
	I0819 18:51:36.189798  444547 command_runner.go:130] > # ]
	I0819 18:51:36.189804  444547 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 18:51:36.189808  444547 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0819 18:51:36.189812  444547 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 18:51:36.189816  444547 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 18:51:36.189820  444547 command_runner.go:130] > # big_files_temporary_dir = ""
	I0819 18:51:36.189826  444547 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 18:51:36.189829  444547 command_runner.go:130] > # CNI plugins.
	I0819 18:51:36.189832  444547 command_runner.go:130] > [crio.network]
	I0819 18:51:36.189838  444547 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 18:51:36.189842  444547 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0819 18:51:36.189847  444547 command_runner.go:130] > # cni_default_network = ""
	I0819 18:51:36.189851  444547 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 18:51:36.189855  444547 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 18:51:36.189860  444547 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 18:51:36.189863  444547 command_runner.go:130] > # plugin_dirs = [
	I0819 18:51:36.189867  444547 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 18:51:36.189870  444547 command_runner.go:130] > # ]
	I0819 18:51:36.189875  444547 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0819 18:51:36.189879  444547 command_runner.go:130] > [crio.metrics]
	I0819 18:51:36.189883  444547 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 18:51:36.189887  444547 command_runner.go:130] > enable_metrics = true
	I0819 18:51:36.189891  444547 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 18:51:36.189895  444547 command_runner.go:130] > # Per default all metrics are enabled.
	I0819 18:51:36.189900  444547 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0819 18:51:36.189906  444547 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 18:51:36.189911  444547 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 18:51:36.189915  444547 command_runner.go:130] > # metrics_collectors = [
	I0819 18:51:36.189918  444547 command_runner.go:130] > # 	"operations",
	I0819 18:51:36.189923  444547 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 18:51:36.189927  444547 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 18:51:36.189931  444547 command_runner.go:130] > # 	"operations_errors",
	I0819 18:51:36.189935  444547 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 18:51:36.189938  444547 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 18:51:36.189946  444547 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 18:51:36.189950  444547 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 18:51:36.189954  444547 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 18:51:36.189958  444547 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 18:51:36.189962  444547 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 18:51:36.189970  444547 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 18:51:36.189973  444547 command_runner.go:130] > # 	"containers_oom_total",
	I0819 18:51:36.189977  444547 command_runner.go:130] > # 	"containers_oom",
	I0819 18:51:36.189980  444547 command_runner.go:130] > # 	"processes_defunct",
	I0819 18:51:36.189984  444547 command_runner.go:130] > # 	"operations_total",
	I0819 18:51:36.189988  444547 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 18:51:36.189993  444547 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 18:51:36.189997  444547 command_runner.go:130] > # 	"operations_errors_total",
	I0819 18:51:36.190001  444547 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 18:51:36.190005  444547 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 18:51:36.190009  444547 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 18:51:36.190013  444547 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 18:51:36.190017  444547 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 18:51:36.190021  444547 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 18:51:36.190026  444547 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 18:51:36.190033  444547 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 18:51:36.190035  444547 command_runner.go:130] > # ]
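If only a subset of collectors were wanted, a minimal sketch using names from the list above (the drop-in name is assumed):

	sudo tee /etc/crio/crio.conf.d/50-metrics.conf <<-'EOF' >/dev/null
	[crio.metrics]
	enable_metrics = true
	metrics_collectors = ["operations", "image_pulls_failures", "containers_oom_total"]
	metrics_port = 9090
	EOF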
	I0819 18:51:36.190040  444547 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 18:51:36.190046  444547 command_runner.go:130] > # metrics_port = 9090
	I0819 18:51:36.190051  444547 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 18:51:36.190055  444547 command_runner.go:130] > # metrics_socket = ""
	I0819 18:51:36.190061  444547 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 18:51:36.190069  444547 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 18:51:36.190075  444547 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 18:51:36.190082  444547 command_runner.go:130] > # certificate on any modification event.
	I0819 18:51:36.190085  444547 command_runner.go:130] > # metrics_cert = ""
	I0819 18:51:36.190090  444547 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 18:51:36.190097  444547 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 18:51:36.190101  444547 command_runner.go:130] > # metrics_key = ""
	I0819 18:51:36.190106  444547 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 18:51:36.190110  444547 command_runner.go:130] > [crio.tracing]
	I0819 18:51:36.190117  444547 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 18:51:36.190124  444547 command_runner.go:130] > # enable_tracing = false
	I0819 18:51:36.190129  444547 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0819 18:51:36.190135  444547 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 18:51:36.190142  444547 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 18:51:36.190147  444547 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0819 18:51:36.190151  444547 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 18:51:36.190154  444547 command_runner.go:130] > [crio.nri]
	I0819 18:51:36.190158  444547 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 18:51:36.190167  444547 command_runner.go:130] > # enable_nri = false
	I0819 18:51:36.190172  444547 command_runner.go:130] > # NRI socket to listen on.
	I0819 18:51:36.190177  444547 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 18:51:36.190183  444547 command_runner.go:130] > # NRI plugin directory to use.
	I0819 18:51:36.190188  444547 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 18:51:36.190194  444547 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 18:51:36.190198  444547 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 18:51:36.190205  444547 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 18:51:36.190209  444547 command_runner.go:130] > # nri_disable_connections = false
	I0819 18:51:36.190217  444547 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 18:51:36.190221  444547 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 18:51:36.190228  444547 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 18:51:36.190233  444547 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0819 18:51:36.190238  444547 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 18:51:36.190243  444547 command_runner.go:130] > [crio.stats]
	I0819 18:51:36.190249  444547 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 18:51:36.190255  444547 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 18:51:36.190259  444547 command_runner.go:130] > # stats_collection_period = 0
	I0819 18:51:36.190450  444547 command_runner.go:130] ! time="2024-08-19 18:51:36.161529726Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 18:51:36.190501  444547 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0819 18:51:36.190630  444547 cni.go:84] Creating CNI manager for ""
	I0819 18:51:36.190641  444547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:51:36.190651  444547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:51:36.190674  444547 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8441 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-124593 NodeName:functional-124593 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:51:36.190815  444547 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-124593"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
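The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; as a hedged aside, the same file could be exercised by hand with a kubeadm dry run, which renders the manifests without applying changes to the host:

	# Dry-run the generated config (no changes are applied).
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run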
	
	I0819 18:51:36.190886  444547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:51:36.200955  444547 command_runner.go:130] > kubeadm
	I0819 18:51:36.200981  444547 command_runner.go:130] > kubectl
	I0819 18:51:36.200986  444547 command_runner.go:130] > kubelet
	I0819 18:51:36.201016  444547 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:51:36.201072  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:51:36.211041  444547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0819 18:51:36.228264  444547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:51:36.245722  444547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0819 18:51:36.263018  444547 ssh_runner.go:195] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I0819 18:51:36.267130  444547 command_runner.go:130] > 192.168.39.22	control-plane.minikube.internal
	I0819 18:51:36.267229  444547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:51:36.398107  444547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:51:36.412895  444547 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593 for IP: 192.168.39.22
	I0819 18:51:36.412924  444547 certs.go:194] generating shared ca certs ...
	I0819 18:51:36.412943  444547 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:51:36.413154  444547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 18:51:36.413203  444547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 18:51:36.413217  444547 certs.go:256] generating profile certs ...
	I0819 18:51:36.413317  444547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.key
	I0819 18:51:36.413414  444547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key.aa5a99d1
	I0819 18:51:36.413463  444547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key
	I0819 18:51:36.413478  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:51:36.413496  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:51:36.413514  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:51:36.413543  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:51:36.413558  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:51:36.413577  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:51:36.413596  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:51:36.413612  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:51:36.413684  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 18:51:36.413728  444547 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 18:51:36.413741  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 18:51:36.413782  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:51:36.413816  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:51:36.413853  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 18:51:36.413906  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 18:51:36.413944  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.413964  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.413981  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.414774  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:51:36.439176  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:51:36.463796  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:51:36.490998  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 18:51:36.514746  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 18:51:36.538661  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:51:36.562630  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:51:36.586739  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 18:51:36.610889  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:51:36.634562  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 18:51:36.658286  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 18:51:36.681715  444547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:51:36.698451  444547 ssh_runner.go:195] Run: openssl version
	I0819 18:51:36.704220  444547 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 18:51:36.704339  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 18:51:36.715389  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720025  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720080  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720142  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.725901  444547 command_runner.go:130] > 51391683
	I0819 18:51:36.726015  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 18:51:36.736206  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 18:51:36.747737  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752558  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752599  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752642  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.758223  444547 command_runner.go:130] > 3ec20f2e
	I0819 18:51:36.758300  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:51:36.767946  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:51:36.779143  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783850  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783902  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783950  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.789800  444547 command_runner.go:130] > b5213941
	I0819 18:51:36.789894  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
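The hash-and-link steps above follow the usual OpenSSL trust-store layout: the certificate's subject hash names a .0 symlink in /etc/ssl/certs. Condensed into one sketch for the minikubeCA certificate:

	# Compute the subject hash and point /etc/ssl/certs/<hash>.0 at the PEM file.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"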
	I0819 18:51:36.799700  444547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:51:36.804144  444547 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:51:36.804180  444547 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 18:51:36.804188  444547 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0819 18:51:36.804194  444547 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 18:51:36.804201  444547 command_runner.go:130] > Access: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804206  444547 command_runner.go:130] > Modify: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804217  444547 command_runner.go:130] > Change: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804222  444547 command_runner.go:130] >  Birth: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804284  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:51:36.810230  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.810339  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:51:36.816159  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.816241  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:51:36.821909  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.822019  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:51:36.827758  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.827847  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:51:36.833329  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.833420  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 18:51:36.838995  444547 command_runner.go:130] > Certificate will not expire
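Each of the checks above relies on openssl's -checkend flag: exit status 0 (and the "Certificate will not expire" message) means the certificate is still valid 86400 seconds, i.e. 24 hours, from now. For example:

	# A non-zero exit status would mean the cert expires within the next 24 hours.
	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/front-proxy-client.crt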
	I0819 18:51:36.839152  444547 kubeadm.go:392] StartCluster: {Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:51:36.839251  444547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:51:36.839310  444547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:51:36.874453  444547 command_runner.go:130] > e908c3e4c32bde71b6441e3ee7b44b5559f7c82f0a6a46e750222d6420ee5768
	I0819 18:51:36.874803  444547 command_runner.go:130] > 790104913a298ce37e7cba7691f8e4d617c4a8638cb821e244b70555efd66fbf
	I0819 18:51:36.874823  444547 command_runner.go:130] > aa84abbb4c9f6505401955b079afdb2f11c046f6879ee08a957007aaca875b03
	I0819 18:51:36.874834  444547 command_runner.go:130] > d8bfd36fbe96397e082982d91cabf0e8c731c94d88ed6c685f08a18b2da4cc6c
	I0819 18:51:36.874843  444547 command_runner.go:130] > e4040e2a8622453f1ef2fd04190fd96bea2fd3de919c6b06deaa38e83d38cf5b
	I0819 18:51:36.874899  444547 command_runner.go:130] > 8df7ac76fe134c848ddad74ca75f51e5c19afffdbd0712cb3d74333cdc6b91dc
	I0819 18:51:36.875009  444547 command_runner.go:130] > 94e0fe7cb19015bd08c7406824ba63263d67eec22f1240ef0f0eaff258976b4f
	I0819 18:51:36.875035  444547 command_runner.go:130] > 871ee5a26fc4d8fe6f04e66a22c78ba6e6b80077bd1066ab3a3c1acb639de113
	I0819 18:51:36.875045  444547 command_runner.go:130] > 70ce15fbc3bc32cd55767aab9cde75fc9d6f452d9c22c53034e5c2af14442b32
	I0819 18:51:36.875236  444547 command_runner.go:130] > 7703464bfd87d33565890b044096dcd7f50a8b1340ec99f2a88431946c69fc1b
	I0819 18:51:36.875268  444547 command_runner.go:130] > d65fc62d1475e4da38a8c95d6b13d520659291d23ac972a2d197d382a7fa6027
	I0819 18:51:36.875360  444547 command_runner.go:130] > d89c8be1ccfda0fbaa81ee519dc2e56af6349b6060b1afd9b3ac512310edc348
	I0819 18:51:36.875408  444547 command_runner.go:130] > e38111b143b726d6b06442db0379d3ecea609946581cb76904945ed68a359c74
	I0819 18:51:36.876958  444547 cri.go:89] found id: "e908c3e4c32bde71b6441e3ee7b44b5559f7c82f0a6a46e750222d6420ee5768"
	I0819 18:51:36.876978  444547 cri.go:89] found id: "790104913a298ce37e7cba7691f8e4d617c4a8638cb821e244b70555efd66fbf"
	I0819 18:51:36.876984  444547 cri.go:89] found id: "aa84abbb4c9f6505401955b079afdb2f11c046f6879ee08a957007aaca875b03"
	I0819 18:51:36.876989  444547 cri.go:89] found id: "d8bfd36fbe96397e082982d91cabf0e8c731c94d88ed6c685f08a18b2da4cc6c"
	I0819 18:51:36.876993  444547 cri.go:89] found id: "e4040e2a8622453f1ef2fd04190fd96bea2fd3de919c6b06deaa38e83d38cf5b"
	I0819 18:51:36.876998  444547 cri.go:89] found id: "8df7ac76fe134c848ddad74ca75f51e5c19afffdbd0712cb3d74333cdc6b91dc"
	I0819 18:51:36.877002  444547 cri.go:89] found id: "94e0fe7cb19015bd08c7406824ba63263d67eec22f1240ef0f0eaff258976b4f"
	I0819 18:51:36.877006  444547 cri.go:89] found id: "871ee5a26fc4d8fe6f04e66a22c78ba6e6b80077bd1066ab3a3c1acb639de113"
	I0819 18:51:36.877010  444547 cri.go:89] found id: "70ce15fbc3bc32cd55767aab9cde75fc9d6f452d9c22c53034e5c2af14442b32"
	I0819 18:51:36.877024  444547 cri.go:89] found id: "7703464bfd87d33565890b044096dcd7f50a8b1340ec99f2a88431946c69fc1b"
	I0819 18:51:36.877032  444547 cri.go:89] found id: "d65fc62d1475e4da38a8c95d6b13d520659291d23ac972a2d197d382a7fa6027"
	I0819 18:51:36.877036  444547 cri.go:89] found id: "d89c8be1ccfda0fbaa81ee519dc2e56af6349b6060b1afd9b3ac512310edc348"
	I0819 18:51:36.877040  444547 cri.go:89] found id: "e38111b143b726d6b06442db0379d3ecea609946581cb76904945ed68a359c74"
	I0819 18:51:36.877044  444547 cri.go:89] found id: ""
	I0819 18:51:36.877087  444547 ssh_runner.go:195] Run: sudo runc list -f json
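The "found id" lines above come from running crictl ps -a --quiet with a namespace label filter and treating each non-empty output line as one container ID. The following Go sketch shows that parsing step; it assumes crictl is on PATH and sudo is available, and it is not minikube's actual cri.go code.

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

// listKubeSystemContainers runs the same crictl invocation seen in the log
// and returns one container ID per non-empty output line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if line := sc.Text(); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, sc.Err()
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("listing failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}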
	
	
	==> CRI-O <==
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.122283674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094230122261531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa2662c6-b685-46ea-abf6-7306e485f78a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.122889289Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6990fe88-a1cc-4cca-9613-1d8e85f28c6f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.122955469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6990fe88-a1cc-4cca-9613-1d8e85f28c6f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.123050338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6990fe88-a1cc-4cca-9613-1d8e85f28c6f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.155267537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e00fce3d-91f6-413e-bc53-faab60fef2e9 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.155367651Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e00fce3d-91f6-413e-bc53-faab60fef2e9 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.156961044Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c78eb2c1-209c-4e8f-a014-0fe297e8c0b7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.157333947Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094230157304974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c78eb2c1-209c-4e8f-a014-0fe297e8c0b7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.157893325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6e6829c-d1e5-4ef0-97b7-241dc3459d68 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.157985407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6e6829c-d1e5-4ef0-97b7-241dc3459d68 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.158078357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6e6829c-d1e5-4ef0-97b7-241dc3459d68 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.194151900Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=854ae76a-a2af-4703-b7aa-0a4f04697372 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.194229291Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=854ae76a-a2af-4703-b7aa-0a4f04697372 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.195335170Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=926ed661-6beb-45b6-a67f-d33de3eb293e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.195893731Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094230195868654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=926ed661-6beb-45b6-a67f-d33de3eb293e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.196424260Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b548670-d9d0-4edb-8611-8433de2cb554 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.196511112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b548670-d9d0-4edb-8611-8433de2cb554 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.196603157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b548670-d9d0-4edb-8611-8433de2cb554 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.227958082Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18fcd38e-64bb-4d4a-980b-dc53e39dfeb4 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.228029866Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18fcd38e-64bb-4d4a-980b-dc53e39dfeb4 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.228886429Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c5a54d9-5da0-4a99-b6b3-3320f9d3f8fa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.229662288Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094230229634690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c5a54d9-5da0-4a99-b6b3-3320f9d3f8fa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.230181440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=454901e0-a993-41f5-adf5-1d3d70cbf2e4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.230450219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=454901e0-a993-41f5-adf5-1d3d70cbf2e4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:50 functional-124593 crio[3397]: time="2024-08-19 19:03:50.230583451Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=454901e0-a993-41f5-adf5-1d3d70cbf2e4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e764198234f75       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   About a minute ago   Exited              kube-controller-manager   15                  1b98c8cb37fd8       kube-controller-manager-functional-124593
	effebbec1cbf2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   About a minute ago   Exited              kube-apiserver            15                  59013506b9174       kube-apiserver-functional-124593
	e3ddc8f73f9e6       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   4 minutes ago        Running             kube-scheduler            4                   ddca0e39cb48d       kube-scheduler-functional-124593
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
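The describe-nodes step fails because nothing is answering on the apiserver port. A diagnostic sketch of the same kind of probe against the /healthz endpoint follows, using the address from the logs and skipping TLS verification purely for illustration; a real client would trust the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz performs an HTTPS GET against the apiserver's /healthz.
// While the apiserver is down it returns the same "connection refused"
// error class seen throughout this post-mortem.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
	return nil
}

func main() {
	if err := probeHealthz("https://192.168.39.22:8441/healthz"); err != nil {
		fmt.Println("apiserver unreachable:", err)
	}
}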
	
	
	==> dmesg <==
	[  +0.066009] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.197712] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.124470] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.281624] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +4.011758] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +4.140421] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.057144] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989777] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[  +0.082461] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.724179] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.114451] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.497778] kauditd_printk_skb: 98 callbacks suppressed
	[Aug19 18:50] systemd-fstab-generator[3042]: Ignoring "noauto" option for root device
	[  +0.214760] systemd-fstab-generator[3119]: Ignoring "noauto" option for root device
	[  +0.240969] systemd-fstab-generator[3138]: Ignoring "noauto" option for root device
	[  +0.217290] systemd-fstab-generator[3183]: Ignoring "noauto" option for root device
	[  +0.371422] systemd-fstab-generator[3242]: Ignoring "noauto" option for root device
	[Aug19 18:51] systemd-fstab-generator[3507]: Ignoring "noauto" option for root device
	[  +0.085616] kauditd_printk_skb: 184 callbacks suppressed
	[  +1.984129] systemd-fstab-generator[3627]: Ignoring "noauto" option for root device
	[Aug19 18:52] kauditd_printk_skb: 81 callbacks suppressed
	[Aug19 18:55] systemd-fstab-generator[9158]: Ignoring "noauto" option for root device
	[Aug19 18:56] kauditd_printk_skb: 70 callbacks suppressed
	[Aug19 18:59] systemd-fstab-generator[10102]: Ignoring "noauto" option for root device
	[Aug19 19:00] kauditd_printk_skb: 54 callbacks suppressed
	
	
	==> kernel <==
	 19:03:50 up 14 min,  0 users,  load average: 0.02, 0.13, 0.09
	Linux functional-124593 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d] <==
	I0819 19:02:43.467753       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0819 19:02:43.743186       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:43.743280       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0819 19:02:43.751731       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 19:02:43.755112       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 19:02:43.758721       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 19:02:43.758852       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 19:02:43.759065       1 instance.go:232] Using reconciler: lease
	W0819 19:02:43.760135       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:44.743832       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:44.743963       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:44.761569       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:46.185098       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:46.207828       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:46.382784       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:48.822814       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:48.974921       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:49.351895       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:53.115051       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:53.237838       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:54.161281       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:59.099479       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:59.492816       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:03:01.794931       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0819 19:03:03.759664       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
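Every apiserver failure above is a refused TCP connection to etcd's client port, 127.0.0.1:2379, ending in the fatal "Error creating leases". The sketch below reproduces just that reachability check with a plain TCP dial; it only shows whether something is listening, not whether etcd is healthy.

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the etcd client endpoint the apiserver is trying to reach.
func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
	if err != nil {
		fmt.Println("etcd client port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 127.0.0.1:2379")
}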
	
	
	==> kube-controller-manager [e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9] <==
	I0819 19:02:44.745523       1 serving.go:386] Generated self-signed cert in-memory
	I0819 19:02:44.990908       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 19:02:44.990991       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:02:44.992289       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 19:02:44.992410       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 19:02:44.992616       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 19:02:44.992692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 19:03:04.995138       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.22:8441/healthz\": dial tcp 192.168.39.22:8441: connect: connection refused"
	
	
	==> kube-scheduler [e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca] <==
	E0819 19:03:04.765268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.22:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.39.22:38304->192.168.39.22:8441: read: connection reset by peer" logger="UnhandledError"
	W0819 19:03:04.765281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.22:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.39.22:38332->192.168.39.22:8441: read: connection reset by peer
	E0819 19:03:04.765325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.22:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.39.22:38332->192.168.39.22:8441: read: connection reset by peer" logger="UnhandledError"
	W0819 19:03:04.765378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.22:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.39.22:38312->192.168.39.22:8441: read: connection reset by peer
	E0819 19:03:04.765406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.22:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.39.22:38312->192.168.39.22:8441: read: connection reset by peer" logger="UnhandledError"
	W0819 19:03:13.315031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.22:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:13.315077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.22:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:21.103592       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.22:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:21.103636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.22:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:21.688385       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.22:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:21.688430       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.22:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:25.611942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.22:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:25.611985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.22:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:29.616112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.22:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:29.616172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.22:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:32.074182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.22:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:32.074280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.22:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:34.545647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.22:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:34.545700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.22:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:46.993572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.22:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:46.993633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.22:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:49.036950       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.22:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:49.037018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.22:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:49.224105       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.22:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:49.224150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.22:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 19 19:03:38 functional-124593 kubelet[10109]: E0819 19:03:38.320569   10109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-124593_kube-system(15de45e6effb382c12ca8494f33bff76)\"" pod="kube-system/kube-apiserver-functional-124593" podUID="15de45e6effb382c12ca8494f33bff76"
	Aug 19 19:03:38 functional-124593 kubelet[10109]: E0819 19:03:38.384299   10109 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094218384019399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:38 functional-124593 kubelet[10109]: E0819 19:03:38.384338   10109 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094218384019399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:38 functional-124593 kubelet[10109]: E0819 19:03:38.751115   10109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/default/events\": dial tcp 192.168.39.22:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-124593.17ed3659059f9af6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-124593,UID:functional-124593,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-124593 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-124593,},FirstTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,LastTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:fu
nctional-124593,}"
	Aug 19 19:03:39 functional-124593 kubelet[10109]: E0819 19:03:39.773367   10109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-124593?timeout=10s\": dial tcp 192.168.39.22:8441: connect: connection refused" interval="7s"
	Aug 19 19:03:41 functional-124593 kubelet[10109]: I0819 19:03:41.953583   10109 kubelet_node_status.go:72] "Attempting to register node" node="functional-124593"
	Aug 19 19:03:41 functional-124593 kubelet[10109]: E0819 19:03:41.954719   10109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 192.168.39.22:8441: connect: connection refused" node="functional-124593"
	Aug 19 19:03:43 functional-124593 kubelet[10109]: E0819 19:03:43.326183   10109 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-functional-124593_kube-system_1d81c5d63cba07001a82e239314e39e2_1\" is already in use by 847f5422f7736630cbb85c950b0ce7fee365173fd629c7c35c4baca6131ab56d. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="fb4264c69f603d50f969b7ac2f0dad4c593bae0da887198d6e0d16aab460b73b"
	Aug 19 19:03:43 functional-124593 kubelet[10109]: E0819 19:03:43.326390   10109 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.15-0,Command:[etcd --advertise-client-urls=https://192.168.39.22:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.39.22:2380 --initial-cluster=functional-124593=https://192.168.39.22:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.39.22:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.39.22:2380 --name=functional-124593 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.
crt --proxy-refresh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{P
robeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-functional-124593_kube-system(1d81c5d63cba07001a82e239314e39e2): CreateContainerError: the c
ontainer name \"k8s_etcd_etcd-functional-124593_kube-system_1d81c5d63cba07001a82e239314e39e2_1\" is already in use by 847f5422f7736630cbb85c950b0ce7fee365173fd629c7c35c4baca6131ab56d. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Aug 19 19:03:43 functional-124593 kubelet[10109]: E0819 19:03:43.327654   10109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-functional-124593_kube-system_1d81c5d63cba07001a82e239314e39e2_1\\\" is already in use by 847f5422f7736630cbb85c950b0ce7fee365173fd629c7c35c4baca6131ab56d. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-functional-124593" podUID="1d81c5d63cba07001a82e239314e39e2"
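The CreateContainerError above means CRI-O still holds an exited container under the name the kubelet wants to reuse. One generic, manual recovery step (not something the test harness performs) is to remove that stale container by the ID quoted in the error; the sketch below does so via crictl rm, with the ID taken from the log purely as an example.

package main

import (
	"fmt"
	"os/exec"
)

// Remove the exited container that is holding the name, so the kubelet's
// next CreateContainer attempt can reuse it.
func main() {
	staleID := "847f5422f7736630cbb85c950b0ce7fee365173fd629c7c35c4baca6131ab56d"
	out, err := exec.Command("sudo", "crictl", "rm", staleID).CombinedOutput()
	fmt.Printf("crictl rm output: %s\n", out)
	if err != nil {
		fmt.Println("removal failed:", err)
	}
}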
	Aug 19 19:03:45 functional-124593 kubelet[10109]: W0819 19:03:45.673444   10109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	Aug 19 19:03:45 functional-124593 kubelet[10109]: E0819 19:03:45.673883   10109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	Aug 19 19:03:46 functional-124593 kubelet[10109]: E0819 19:03:46.774948   10109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-124593?timeout=10s\": dial tcp 192.168.39.22:8441: connect: connection refused" interval="7s"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: I0819 19:03:48.320002   10109 scope.go:117] "RemoveContainer" containerID="e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.320617   10109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-124593_kube-system(c71ff42fdd5902541920b0f91ca1cbbc)\"" pod="kube-system/kube-controller-manager-functional-124593" podUID="c71ff42fdd5902541920b0f91ca1cbbc"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.331567   10109 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:03:48 functional-124593 kubelet[10109]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:03:48 functional-124593 kubelet[10109]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:03:48 functional-124593 kubelet[10109]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:03:48 functional-124593 kubelet[10109]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.385433   10109 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094228385106955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.385459   10109 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094228385106955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.752200   10109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/default/events\": dial tcp 192.168.39.22:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-124593.17ed3659059f9af6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-124593,UID:functional-124593,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-124593 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-124593,},FirstTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,LastTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-124593,}"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: I0819 19:03:48.956588   10109 kubelet_node_status.go:72] "Attempting to register node" node="functional-124593"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.957346   10109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 192.168.39.22:8441: connect: connection refused" node="functional-124593"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:03:49.872335  447700 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-430949/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
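Note on the stderr above: "bufio.Scanner: token too long" is Go's bufio.Scanner hitting its default 64 KiB per-token limit while reading lastStart.txt, so the last-start log could not be echoed into this report; it is a log-collection limitation, while the underlying failure is the apiserver on 192.168.39.22:8441 refusing connections. A minimal sketch (illustrative only, not the minikube source; the file path is hypothetical) of reading such a file with the limit raised via Scanner.Buffer:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path; the report's lastStart.txt lives under the Jenkins workspace.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// bufio.Scanner defaults to bufio.MaxScanTokenSize (64 KiB) per token;
	// a longer line makes Err() return bufio.ErrTooLong ("token too long").
	// Allow lines up to 10 MiB instead.
	sc.Buffer(make([]byte, 0, bufio.MaxScanTokenSize), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}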
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-124593 -n functional-124593
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-124593 -n functional-124593: exit status 2 (232.217844ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-124593" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (834.14s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-124593 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-124593 get po -A: exit status 1 (57.621215ms)

                                                
                                                
** stderr ** 
	E0819 19:03:50.987704  447781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	E0819 19:03:50.989446  447781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	E0819 19:03:50.991170  447781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	E0819 19:03:50.992790  447781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	E0819 19:03:50.994353  447781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	The connection to the server 192.168.39.22:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-124593 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"E0819 19:03:50.987704  447781 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.22:8441/api?timeout=32s\\\": dial tcp 192.168.39.22:8441: connect: connection refused\"\nE0819 19:03:50.989446  447781 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.22:8441/api?timeout=32s\\\": dial tcp 192.168.39.22:8441: connect: connection refused\"\nE0819 19:03:50.991170  447781 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.22:8441/api?timeout=32s\\\": dial tcp 192.168.39.22:8441: connect: connection refused\"\nE0819 19:03:50.992790  447781 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.22:8441/api?timeout=32s\\\": dial tcp 192.168.39.22:8441: connect: connection refused\"\nE0819 19:03:50.994353  447781 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.22:8441/api?timeout=32s\\\": dial tcp 192.168.39.22:8441: connect: connection refused\"\nThe connection to the server 192.168.39.22:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-124593 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-124593 get po -A"
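Every kubectl error above is the same low-level symptom: nothing is listening on 192.168.39.22:8441. A minimal, hypothetical Go probe (not part of the test suite) that reproduces the connectivity check client-go is failing before any API discovery happens:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the failing output; adjust for other profiles.
	addr := "192.168.39.22:8441"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// With kube-apiserver down this prints a "connection refused" error,
		// matching the kubectl stderr captured above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}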
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-124593 -n functional-124593
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-124593 -n functional-124593: exit status 2 (219.107181ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 logs -n 25
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ip      | addons-966657 ip               | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:42 UTC | 19 Aug 24 18:42 UTC |
	| addons  | addons-966657 addons disable   | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:42 UTC | 19 Aug 24 18:42 UTC |
	|         | ingress-dns --alsologtostderr  |                   |         |         |                     |                     |
	|         | -v=1                           |                   |         |         |                     |                     |
	| addons  | addons-966657 addons disable   | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:42 UTC | 19 Aug 24 18:43 UTC |
	|         | ingress --alsologtostderr -v=1 |                   |         |         |                     |                     |
	| addons  | addons-966657 addons           | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:45 UTC | 19 Aug 24 18:45 UTC |
	|         | disable metrics-server         |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| stop    | -p addons-966657               | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:45 UTC |                     |
	| addons  | enable dashboard -p            | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:47 UTC |                     |
	|         | addons-966657                  |                   |         |         |                     |                     |
	| addons  | disable dashboard -p           | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:47 UTC |                     |
	|         | addons-966657                  |                   |         |         |                     |                     |
	| addons  | disable gvisor -p              | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC |                     |
	|         | addons-966657                  |                   |         |         |                     |                     |
	| delete  | -p addons-966657               | addons-966657     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	| start   | -p nospam-212543 -n=1          | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | --memory=2250 --wait=false     |                   |         |         |                     |                     |
	|         | --log_dir=/tmp/nospam-212543   |                   |         |         |                     |                     |
	|         | --driver=kvm2                  |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC |                     |
	|         | /tmp/nospam-212543 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC |                     |
	|         | /tmp/nospam-212543 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC |                     |
	|         | /tmp/nospam-212543 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| pause   | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 pause       |                   |         |         |                     |                     |
	| pause   | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 pause       |                   |         |         |                     |                     |
	| pause   | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 pause       |                   |         |         |                     |                     |
	| unpause | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 unpause     |                   |         |         |                     |                     |
	| stop    | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 stop        |                   |         |         |                     |                     |
	| stop    | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:49 UTC |
	|         | /tmp/nospam-212543 stop        |                   |         |         |                     |                     |
	| stop    | nospam-212543 --log_dir        | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC | 19 Aug 24 18:49 UTC |
	|         | /tmp/nospam-212543 stop        |                   |         |         |                     |                     |
	| delete  | -p nospam-212543               | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC | 19 Aug 24 18:49 UTC |
	| start   | -p functional-124593           | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC | 19 Aug 24 18:49 UTC |
	|         | --memory=4000                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441          |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | -p functional-124593           | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC |                     |
	|         | --alsologtostderr -v=8         |                   |         |         |                     |                     |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:49:56
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:49:56.790328  444547 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:49:56.790453  444547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:49:56.790459  444547 out.go:358] Setting ErrFile to fd 2...
	I0819 18:49:56.790463  444547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:49:56.790638  444547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 18:49:56.791174  444547 out.go:352] Setting JSON to false
	I0819 18:49:56.792114  444547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9148,"bootTime":1724084249,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:49:56.792181  444547 start.go:139] virtualization: kvm guest
	I0819 18:49:56.794648  444547 out.go:177] * [functional-124593] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:49:56.796256  444547 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 18:49:56.796302  444547 notify.go:220] Checking for updates...
	I0819 18:49:56.799145  444547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:49:56.800604  444547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 18:49:56.802061  444547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 18:49:56.803353  444547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:49:56.804793  444547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:49:56.806582  444547 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:49:56.806680  444547 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 18:49:56.807152  444547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:49:56.807235  444547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:49:56.823439  444547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0819 18:49:56.823898  444547 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:49:56.824445  444547 main.go:141] libmachine: Using API Version  1
	I0819 18:49:56.824484  444547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:49:56.824923  444547 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:49:56.825223  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.864107  444547 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:49:56.865533  444547 start.go:297] selected driver: kvm2
	I0819 18:49:56.865559  444547 start.go:901] validating driver "kvm2" against &{Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:49:56.865676  444547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:49:56.866051  444547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:49:56.866145  444547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:49:56.882415  444547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:49:56.883177  444547 cni.go:84] Creating CNI manager for ""
	I0819 18:49:56.883193  444547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:49:56.883244  444547 start.go:340] cluster config:
	{Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:49:56.883396  444547 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:49:56.885199  444547 out.go:177] * Starting "functional-124593" primary control-plane node in "functional-124593" cluster
	I0819 18:49:56.886649  444547 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:49:56.886699  444547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:49:56.886708  444547 cache.go:56] Caching tarball of preloaded images
	I0819 18:49:56.886828  444547 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:49:56.886844  444547 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:49:56.886977  444547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/config.json ...
	I0819 18:49:56.887255  444547 start.go:360] acquireMachinesLock for functional-124593: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:49:56.887316  444547 start.go:364] duration metric: took 31.483µs to acquireMachinesLock for "functional-124593"
	I0819 18:49:56.887333  444547 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:49:56.887345  444547 fix.go:54] fixHost starting: 
	I0819 18:49:56.887711  444547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:49:56.887765  444547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:49:56.903210  444547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43899
	I0819 18:49:56.903686  444547 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:49:56.904263  444547 main.go:141] libmachine: Using API Version  1
	I0819 18:49:56.904298  444547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:49:56.904680  444547 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:49:56.904935  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.905158  444547 main.go:141] libmachine: (functional-124593) Calling .GetState
	I0819 18:49:56.906833  444547 fix.go:112] recreateIfNeeded on functional-124593: state=Running err=<nil>
	W0819 18:49:56.906856  444547 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:49:56.908782  444547 out.go:177] * Updating the running kvm2 "functional-124593" VM ...
	I0819 18:49:56.910443  444547 machine.go:93] provisionDockerMachine start ...
	I0819 18:49:56.910478  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.910823  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:56.913259  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:56.913615  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:56.913638  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:56.913753  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:56.914043  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:56.914207  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:56.914341  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:56.914485  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:56.914684  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:56.914697  444547 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:49:57.017550  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-124593
	
	I0819 18:49:57.017585  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.017923  444547 buildroot.go:166] provisioning hostname "functional-124593"
	I0819 18:49:57.017956  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.018164  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.021185  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.021551  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.021598  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.021780  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.022011  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.022177  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.022309  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.022452  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.022654  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.022668  444547 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-124593 && echo "functional-124593" | sudo tee /etc/hostname
	I0819 18:49:57.141478  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-124593
	
	I0819 18:49:57.141514  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.144157  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.144414  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.144449  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.144722  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.144969  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.145192  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.145388  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.145570  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.145756  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.145776  444547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-124593' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-124593/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-124593' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:49:57.249989  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:49:57.250034  444547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 18:49:57.250086  444547 buildroot.go:174] setting up certificates
	I0819 18:49:57.250099  444547 provision.go:84] configureAuth start
	I0819 18:49:57.250118  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.250442  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:49:57.253181  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.253490  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.253519  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.253712  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.256213  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.256541  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.256586  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.256752  444547 provision.go:143] copyHostCerts
	I0819 18:49:57.256784  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 18:49:57.256824  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 18:49:57.256848  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 18:49:57.256918  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 18:49:57.257021  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 18:49:57.257043  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 18:49:57.257048  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 18:49:57.257071  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 18:49:57.257122  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 18:49:57.257160  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 18:49:57.257176  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 18:49:57.257198  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 18:49:57.257249  444547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.functional-124593 san=[127.0.0.1 192.168.39.22 functional-124593 localhost minikube]
	I0819 18:49:57.505075  444547 provision.go:177] copyRemoteCerts
	I0819 18:49:57.505163  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:49:57.505194  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.508248  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.508654  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.508690  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.508942  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.509160  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.509381  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.509556  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:49:57.591978  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:49:57.592075  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 18:49:57.620626  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:49:57.620699  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 18:49:57.646085  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:49:57.646168  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:49:57.671918  444547 provision.go:87] duration metric: took 421.80001ms to configureAuth
	I0819 18:49:57.671954  444547 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:49:57.672176  444547 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:49:57.672267  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.675054  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.675420  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.675456  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.675667  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.675902  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.676057  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.676211  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.676410  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.676596  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.676611  444547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:50:03.241286  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:50:03.241321  444547 machine.go:96] duration metric: took 6.330855619s to provisionDockerMachine
	I0819 18:50:03.241334  444547 start.go:293] postStartSetup for "functional-124593" (driver="kvm2")
	I0819 18:50:03.241346  444547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:50:03.241368  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.241892  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:50:03.241919  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.244822  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.245262  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.245291  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.245469  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.245716  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.245889  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.246048  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.327892  444547 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:50:03.332233  444547 command_runner.go:130] > NAME=Buildroot
	I0819 18:50:03.332262  444547 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 18:50:03.332268  444547 command_runner.go:130] > ID=buildroot
	I0819 18:50:03.332276  444547 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 18:50:03.332284  444547 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 18:50:03.332381  444547 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:50:03.332400  444547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 18:50:03.332476  444547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 18:50:03.332579  444547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 18:50:03.332593  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 18:50:03.332685  444547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts -> hosts in /etc/test/nested/copy/438159
	I0819 18:50:03.332692  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts -> /etc/test/nested/copy/438159/hosts
	I0819 18:50:03.332732  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/438159
	I0819 18:50:03.343618  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 18:50:03.367775  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts --> /etc/test/nested/copy/438159/hosts (40 bytes)
	I0819 18:50:03.392035  444547 start.go:296] duration metric: took 150.684705ms for postStartSetup
	I0819 18:50:03.392093  444547 fix.go:56] duration metric: took 6.504748451s for fixHost
	I0819 18:50:03.392120  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.394902  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.395203  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.395231  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.395450  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.395682  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.395876  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.396030  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.396215  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:50:03.396420  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:50:03.396434  444547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:50:03.498031  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724093403.488650243
	
	I0819 18:50:03.498062  444547 fix.go:216] guest clock: 1724093403.488650243
	I0819 18:50:03.498069  444547 fix.go:229] Guest: 2024-08-19 18:50:03.488650243 +0000 UTC Remote: 2024-08-19 18:50:03.392098301 +0000 UTC m=+6.637869514 (delta=96.551942ms)
	I0819 18:50:03.498115  444547 fix.go:200] guest clock delta is within tolerance: 96.551942ms
	I0819 18:50:03.498121  444547 start.go:83] releasing machines lock for "functional-124593", held for 6.610795712s
	I0819 18:50:03.498146  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.498456  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:50:03.501197  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.501685  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.501717  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.501963  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502567  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502825  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502931  444547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:50:03.502977  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.503104  444547 ssh_runner.go:195] Run: cat /version.json
	I0819 18:50:03.503130  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.505641  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.505904  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.505942  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.505982  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.506089  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.506248  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.506286  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.506326  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.506510  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.506529  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.506705  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.506709  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.506856  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.507023  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.596444  444547 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 18:50:03.596676  444547 ssh_runner.go:195] Run: systemctl --version
	I0819 18:50:03.642156  444547 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 18:50:03.642205  444547 command_runner.go:130] > systemd 252 (252)
	I0819 18:50:03.642223  444547 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 18:50:03.642284  444547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:50:04.032467  444547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 18:50:04.057730  444547 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 18:50:04.057919  444547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:50:04.058009  444547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:50:04.094792  444547 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 18:50:04.094824  444547 start.go:495] detecting cgroup driver to use...
	I0819 18:50:04.094892  444547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:50:04.216404  444547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:50:04.250117  444547 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:50:04.250182  444547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:50:04.298450  444547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:50:04.329276  444547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:50:04.576464  444547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:50:04.796403  444547 docker.go:233] disabling docker service ...
	I0819 18:50:04.796509  444547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:50:04.824051  444547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:50:04.841929  444547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:50:05.032450  444547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:50:05.230662  444547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:50:05.261270  444547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:50:05.307751  444547 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0819 18:50:05.308002  444547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:50:05.308071  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.325985  444547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:50:05.326072  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.340857  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.355923  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.368797  444547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:50:05.384107  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.396132  444547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.407497  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
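The sed commands above rewrite pause_image, cgroup_manager, conmon_cgroup and default_sysctls inside /etc/crio/crio.conf.d/02-crio.conf. A hedged Go sketch of the same whole-line rewrite technique (setTOMLKey is a hypothetical helper, not minikube code; the file mode is assumed):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setTOMLKey replaces any `<key> = ...` line in a CRI-O drop-in config,
    // analogous to the `sed -i 's|^.*<key> = .*$|...|'` calls in the log above.
    func setTOMLKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // The two edits performed at 18:50:05 above:
        _ = setTOMLKey("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10")
        _ = setTOMLKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs")
    }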
	I0819 18:50:05.421137  444547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:50:05.431493  444547 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 18:50:05.431832  444547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:50:05.444023  444547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:50:05.610160  444547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:51:35.953940  444547 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.343723561s)
	I0819 18:51:35.953984  444547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:51:35.954042  444547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:51:35.958905  444547 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 18:51:35.958943  444547 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 18:51:35.958954  444547 command_runner.go:130] > Device: 0,22	Inode: 1653        Links: 1
	I0819 18:51:35.958965  444547 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 18:51:35.958973  444547 command_runner.go:130] > Access: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958982  444547 command_runner.go:130] > Modify: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958993  444547 command_runner.go:130] > Change: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958999  444547 command_runner.go:130] >  Birth: -
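After `systemctl restart crio` (which took 1m30s on this run), the code waits up to 60s for /var/run/crio/crio.sock to appear and then stats it, as shown above. A self-contained sketch of that poll-for-socket pattern (timings taken from the log; the implementation is illustrative, not minikube's):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the
    // timeout expires - the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }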
	I0819 18:51:35.959026  444547 start.go:563] Will wait 60s for crictl version
	I0819 18:51:35.959080  444547 ssh_runner.go:195] Run: which crictl
	I0819 18:51:35.962908  444547 command_runner.go:130] > /usr/bin/crictl
	I0819 18:51:35.963010  444547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:51:35.995379  444547 command_runner.go:130] > Version:  0.1.0
	I0819 18:51:35.995417  444547 command_runner.go:130] > RuntimeName:  cri-o
	I0819 18:51:35.995425  444547 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 18:51:35.995433  444547 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 18:51:35.996527  444547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:51:35.996626  444547 ssh_runner.go:195] Run: crio --version
	I0819 18:51:36.025037  444547 command_runner.go:130] > crio version 1.29.1
	I0819 18:51:36.025067  444547 command_runner.go:130] > Version:        1.29.1
	I0819 18:51:36.025076  444547 command_runner.go:130] > GitCommit:      unknown
	I0819 18:51:36.025082  444547 command_runner.go:130] > GitCommitDate:  unknown
	I0819 18:51:36.025088  444547 command_runner.go:130] > GitTreeState:   clean
	I0819 18:51:36.025097  444547 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 18:51:36.025103  444547 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 18:51:36.025108  444547 command_runner.go:130] > Compiler:       gc
	I0819 18:51:36.025115  444547 command_runner.go:130] > Platform:       linux/amd64
	I0819 18:51:36.025122  444547 command_runner.go:130] > Linkmode:       dynamic
	I0819 18:51:36.025137  444547 command_runner.go:130] > BuildTags:      
	I0819 18:51:36.025142  444547 command_runner.go:130] >   containers_image_ostree_stub
	I0819 18:51:36.025147  444547 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 18:51:36.025151  444547 command_runner.go:130] >   btrfs_noversion
	I0819 18:51:36.025156  444547 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 18:51:36.025161  444547 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 18:51:36.025169  444547 command_runner.go:130] >   seccomp
	I0819 18:51:36.025175  444547 command_runner.go:130] > LDFlags:          unknown
	I0819 18:51:36.025182  444547 command_runner.go:130] > SeccompEnabled:   true
	I0819 18:51:36.025187  444547 command_runner.go:130] > AppArmorEnabled:  false
	I0819 18:51:36.025256  444547 ssh_runner.go:195] Run: crio --version
	I0819 18:51:36.052216  444547 command_runner.go:130] > crio version 1.29.1
	I0819 18:51:36.052240  444547 command_runner.go:130] > Version:        1.29.1
	I0819 18:51:36.052247  444547 command_runner.go:130] > GitCommit:      unknown
	I0819 18:51:36.052252  444547 command_runner.go:130] > GitCommitDate:  unknown
	I0819 18:51:36.052256  444547 command_runner.go:130] > GitTreeState:   clean
	I0819 18:51:36.052261  444547 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 18:51:36.052266  444547 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 18:51:36.052270  444547 command_runner.go:130] > Compiler:       gc
	I0819 18:51:36.052282  444547 command_runner.go:130] > Platform:       linux/amd64
	I0819 18:51:36.052288  444547 command_runner.go:130] > Linkmode:       dynamic
	I0819 18:51:36.052294  444547 command_runner.go:130] > BuildTags:      
	I0819 18:51:36.052301  444547 command_runner.go:130] >   containers_image_ostree_stub
	I0819 18:51:36.052307  444547 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 18:51:36.052317  444547 command_runner.go:130] >   btrfs_noversion
	I0819 18:51:36.052324  444547 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 18:51:36.052333  444547 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 18:51:36.052338  444547 command_runner.go:130] >   seccomp
	I0819 18:51:36.052345  444547 command_runner.go:130] > LDFlags:          unknown
	I0819 18:51:36.052350  444547 command_runner.go:130] > SeccompEnabled:   true
	I0819 18:51:36.052356  444547 command_runner.go:130] > AppArmorEnabled:  false
	I0819 18:51:36.055292  444547 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:51:36.056598  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:51:36.059532  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:51:36.059864  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:51:36.059895  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:51:36.060137  444547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:51:36.064416  444547 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0819 18:51:36.064570  444547 kubeadm.go:883] updating cluster {Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:51:36.064698  444547 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:51:36.064782  444547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:51:36.110239  444547 command_runner.go:130] > {
	I0819 18:51:36.110264  444547 command_runner.go:130] >   "images": [
	I0819 18:51:36.110268  444547 command_runner.go:130] >     {
	I0819 18:51:36.110277  444547 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 18:51:36.110281  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110287  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 18:51:36.110290  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110294  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110303  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 18:51:36.110310  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 18:51:36.110314  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110319  444547 command_runner.go:130] >       "size": "87165492",
	I0819 18:51:36.110324  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110330  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110343  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110350  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110359  444547 command_runner.go:130] >     },
	I0819 18:51:36.110364  444547 command_runner.go:130] >     {
	I0819 18:51:36.110373  444547 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 18:51:36.110391  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110399  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 18:51:36.110402  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110406  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110414  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 18:51:36.110425  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 18:51:36.110432  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110443  444547 command_runner.go:130] >       "size": "31470524",
	I0819 18:51:36.110453  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110461  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110468  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110477  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110483  444547 command_runner.go:130] >     },
	I0819 18:51:36.110502  444547 command_runner.go:130] >     {
	I0819 18:51:36.110513  444547 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 18:51:36.110522  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110533  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 18:51:36.110539  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110549  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110563  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 18:51:36.110577  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 18:51:36.110586  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110594  444547 command_runner.go:130] >       "size": "61245718",
	I0819 18:51:36.110601  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110611  444547 command_runner.go:130] >       "username": "nonroot",
	I0819 18:51:36.110621  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110631  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110637  444547 command_runner.go:130] >     },
	I0819 18:51:36.110645  444547 command_runner.go:130] >     {
	I0819 18:51:36.110658  444547 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 18:51:36.110668  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110677  444547 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 18:51:36.110684  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110701  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110715  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 18:51:36.110733  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 18:51:36.110742  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110753  444547 command_runner.go:130] >       "size": "149009664",
	I0819 18:51:36.110760  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.110764  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.110770  444547 command_runner.go:130] >       },
	I0819 18:51:36.110777  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110787  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110797  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110805  444547 command_runner.go:130] >     },
	I0819 18:51:36.110814  444547 command_runner.go:130] >     {
	I0819 18:51:36.110823  444547 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 18:51:36.110832  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110842  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 18:51:36.110849  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110853  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110868  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 18:51:36.110884  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 18:51:36.110893  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110901  444547 command_runner.go:130] >       "size": "95233506",
	I0819 18:51:36.110909  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.110918  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.110927  444547 command_runner.go:130] >       },
	I0819 18:51:36.110934  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110939  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110947  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110956  444547 command_runner.go:130] >     },
	I0819 18:51:36.110965  444547 command_runner.go:130] >     {
	I0819 18:51:36.110978  444547 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 18:51:36.110988  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110999  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 18:51:36.111007  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111013  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111025  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 18:51:36.111040  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 18:51:36.111049  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111060  444547 command_runner.go:130] >       "size": "89437512",
	I0819 18:51:36.111070  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111080  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.111089  444547 command_runner.go:130] >       },
	I0819 18:51:36.111096  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111104  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111114  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111122  444547 command_runner.go:130] >     },
	I0819 18:51:36.111128  444547 command_runner.go:130] >     {
	I0819 18:51:36.111140  444547 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 18:51:36.111148  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111154  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 18:51:36.111163  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111170  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111185  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 18:51:36.111199  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 18:51:36.111206  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111213  444547 command_runner.go:130] >       "size": "92728217",
	I0819 18:51:36.111223  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.111230  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111239  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111246  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111254  444547 command_runner.go:130] >     },
	I0819 18:51:36.111267  444547 command_runner.go:130] >     {
	I0819 18:51:36.111281  444547 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 18:51:36.111290  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111299  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 18:51:36.111307  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111313  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111333  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 18:51:36.111345  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 18:51:36.111351  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111355  444547 command_runner.go:130] >       "size": "68420936",
	I0819 18:51:36.111361  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111365  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.111370  444547 command_runner.go:130] >       },
	I0819 18:51:36.111374  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111381  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111385  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111389  444547 command_runner.go:130] >     },
	I0819 18:51:36.111393  444547 command_runner.go:130] >     {
	I0819 18:51:36.111399  444547 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 18:51:36.111405  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111410  444547 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 18:51:36.111415  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111420  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111429  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 18:51:36.111438  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 18:51:36.111442  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111448  444547 command_runner.go:130] >       "size": "742080",
	I0819 18:51:36.111452  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111456  444547 command_runner.go:130] >         "value": "65535"
	I0819 18:51:36.111460  444547 command_runner.go:130] >       },
	I0819 18:51:36.111464  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111480  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111486  444547 command_runner.go:130] >       "pinned": true
	I0819 18:51:36.111494  444547 command_runner.go:130] >     }
	I0819 18:51:36.111502  444547 command_runner.go:130] >   ]
	I0819 18:51:36.111507  444547 command_runner.go:130] > }
	I0819 18:51:36.111701  444547 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:51:36.111714  444547 crio.go:433] Images already preloaded, skipping extraction
	I0819 18:51:36.111767  444547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:51:36.143806  444547 command_runner.go:130] > {
	I0819 18:51:36.143831  444547 command_runner.go:130] >   "images": [
	I0819 18:51:36.143835  444547 command_runner.go:130] >     {
	I0819 18:51:36.143843  444547 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 18:51:36.143848  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.143854  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 18:51:36.143857  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143861  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.143870  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 18:51:36.143877  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 18:51:36.143883  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143887  444547 command_runner.go:130] >       "size": "87165492",
	I0819 18:51:36.143891  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.143898  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.143904  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.143909  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.143912  444547 command_runner.go:130] >     },
	I0819 18:51:36.143916  444547 command_runner.go:130] >     {
	I0819 18:51:36.143922  444547 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 18:51:36.143929  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.143934  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 18:51:36.143939  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143943  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.143953  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 18:51:36.143960  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 18:51:36.143967  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143978  444547 command_runner.go:130] >       "size": "31470524",
	I0819 18:51:36.143984  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.143992  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144001  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144007  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144016  444547 command_runner.go:130] >     },
	I0819 18:51:36.144021  444547 command_runner.go:130] >     {
	I0819 18:51:36.144036  444547 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 18:51:36.144043  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144048  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 18:51:36.144054  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144058  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144067  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 18:51:36.144085  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 18:51:36.144093  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144100  444547 command_runner.go:130] >       "size": "61245718",
	I0819 18:51:36.144109  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.144119  444547 command_runner.go:130] >       "username": "nonroot",
	I0819 18:51:36.144126  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144134  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144138  444547 command_runner.go:130] >     },
	I0819 18:51:36.144142  444547 command_runner.go:130] >     {
	I0819 18:51:36.144148  444547 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 18:51:36.144154  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144159  444547 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 18:51:36.144162  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144165  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144172  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 18:51:36.144188  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 18:51:36.144197  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144204  444547 command_runner.go:130] >       "size": "149009664",
	I0819 18:51:36.144213  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144220  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144227  444547 command_runner.go:130] >       },
	I0819 18:51:36.144231  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144237  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144243  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144249  444547 command_runner.go:130] >     },
	I0819 18:51:36.144252  444547 command_runner.go:130] >     {
	I0819 18:51:36.144259  444547 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 18:51:36.144267  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144276  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 18:51:36.144285  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144291  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144305  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 18:51:36.144320  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 18:51:36.144327  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144333  444547 command_runner.go:130] >       "size": "95233506",
	I0819 18:51:36.144337  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144341  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144347  444547 command_runner.go:130] >       },
	I0819 18:51:36.144352  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144358  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144365  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144374  444547 command_runner.go:130] >     },
	I0819 18:51:36.144380  444547 command_runner.go:130] >     {
	I0819 18:51:36.144389  444547 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 18:51:36.144399  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144408  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 18:51:36.144419  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144427  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144435  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 18:51:36.144449  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 18:51:36.144471  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144501  444547 command_runner.go:130] >       "size": "89437512",
	I0819 18:51:36.144507  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144516  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144521  444547 command_runner.go:130] >       },
	I0819 18:51:36.144526  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144532  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144541  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144547  444547 command_runner.go:130] >     },
	I0819 18:51:36.144558  444547 command_runner.go:130] >     {
	I0819 18:51:36.144568  444547 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 18:51:36.144577  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144585  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 18:51:36.144593  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144600  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144611  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 18:51:36.144623  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 18:51:36.144632  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144640  444547 command_runner.go:130] >       "size": "92728217",
	I0819 18:51:36.144649  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.144656  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144663  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144669  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144677  444547 command_runner.go:130] >     },
	I0819 18:51:36.144682  444547 command_runner.go:130] >     {
	I0819 18:51:36.144694  444547 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 18:51:36.144704  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144716  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 18:51:36.144725  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144734  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144755  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 18:51:36.144768  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 18:51:36.144775  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144780  444547 command_runner.go:130] >       "size": "68420936",
	I0819 18:51:36.144789  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144798  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144807  444547 command_runner.go:130] >       },
	I0819 18:51:36.144816  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144826  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144835  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144843  444547 command_runner.go:130] >     },
	I0819 18:51:36.144849  444547 command_runner.go:130] >     {
	I0819 18:51:36.144864  444547 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 18:51:36.144873  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144882  444547 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 18:51:36.144892  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144901  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144912  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 18:51:36.144926  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 18:51:36.144934  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144940  444547 command_runner.go:130] >       "size": "742080",
	I0819 18:51:36.144944  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144950  444547 command_runner.go:130] >         "value": "65535"
	I0819 18:51:36.144958  444547 command_runner.go:130] >       },
	I0819 18:51:36.144968  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144979  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144988  444547 command_runner.go:130] >       "pinned": true
	I0819 18:51:36.144995  444547 command_runner.go:130] >     }
	I0819 18:51:36.145001  444547 command_runner.go:130] >   ]
	I0819 18:51:36.145008  444547 command_runner.go:130] > }
	I0819 18:51:36.145182  444547 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:51:36.145198  444547 cache_images.go:84] Images are preloaded, skipping loading
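Both `sudo crictl images --output json` runs above return the same image list, which is how the preload check concludes that nothing needs extracting. A sketch of decoding that JSON shape (field names taken from the output above; the struct and program are illustrative, not minikube's code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList matches the fields visible in the crictl output above.
    type imageList struct {
        Images []struct {
            ID          string   `json:"id"`
            RepoTags    []string `json:"repoTags"`
            RepoDigests []string `json:"repoDigests"`
            Size        string   `json:"size"`
            Pinned      bool     `json:"pinned"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            fmt.Println(img.RepoTags, img.Size)
        }
    }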
	I0819 18:51:36.145207  444547 kubeadm.go:934] updating node { 192.168.39.22 8441 v1.31.0 crio true true} ...
	I0819 18:51:36.145347  444547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-124593 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
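The kubelet drop-in printed above pins ExecStart to the minikube-shipped kubelet binary with this node's hostname and IP. A sketch that renders an equivalent drop-in with text/template (values hard-coded from this run; this is not the generator minikube actually uses):

    package main

    import (
        "os"
        "text/template"
    )

    // dropIn reproduces the unit fragment logged by kubeadm.go:946 above.
    var dropIn = template.Must(template.New("kubelet").Parse(`[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

    [Install]
    `))

    func main() {
        _ = dropIn.Execute(os.Stdout, struct{ Version, Node, IP string }{"v1.31.0", "functional-124593", "192.168.39.22"})
    }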
	I0819 18:51:36.145440  444547 ssh_runner.go:195] Run: crio config
	I0819 18:51:36.185689  444547 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 18:51:36.185722  444547 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 18:51:36.185733  444547 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 18:51:36.185738  444547 command_runner.go:130] > #
	I0819 18:51:36.185763  444547 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 18:51:36.185772  444547 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 18:51:36.185782  444547 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 18:51:36.185794  444547 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 18:51:36.185800  444547 command_runner.go:130] > # reload'.
	I0819 18:51:36.185810  444547 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 18:51:36.185824  444547 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 18:51:36.185834  444547 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 18:51:36.185851  444547 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 18:51:36.185857  444547 command_runner.go:130] > [crio]
	I0819 18:51:36.185867  444547 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 18:51:36.185878  444547 command_runner.go:130] > # containers images, in this directory.
	I0819 18:51:36.185886  444547 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 18:51:36.185906  444547 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 18:51:36.185916  444547 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 18:51:36.185927  444547 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 18:51:36.185937  444547 command_runner.go:130] > # imagestore = ""
	I0819 18:51:36.185947  444547 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 18:51:36.185960  444547 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 18:51:36.185968  444547 command_runner.go:130] > storage_driver = "overlay"
	I0819 18:51:36.185979  444547 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 18:51:36.185990  444547 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 18:51:36.186001  444547 command_runner.go:130] > storage_option = [
	I0819 18:51:36.186010  444547 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 18:51:36.186018  444547 command_runner.go:130] > ]
	I0819 18:51:36.186029  444547 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 18:51:36.186041  444547 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 18:51:36.186052  444547 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 18:51:36.186068  444547 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 18:51:36.186082  444547 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 18:51:36.186092  444547 command_runner.go:130] > # always happen on a node reboot
	I0819 18:51:36.186103  444547 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 18:51:36.186124  444547 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 18:51:36.186136  444547 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 18:51:36.186147  444547 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 18:51:36.186155  444547 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 18:51:36.186168  444547 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 18:51:36.186183  444547 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 18:51:36.186193  444547 command_runner.go:130] > # internal_wipe = true
	I0819 18:51:36.186206  444547 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 18:51:36.186217  444547 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 18:51:36.186227  444547 command_runner.go:130] > # internal_repair = false
	I0819 18:51:36.186235  444547 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 18:51:36.186247  444547 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 18:51:36.186256  444547 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 18:51:36.186268  444547 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0819 18:51:36.186303  444547 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 18:51:36.186317  444547 command_runner.go:130] > [crio.api]
	I0819 18:51:36.186326  444547 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 18:51:36.186333  444547 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 18:51:36.186342  444547 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 18:51:36.186353  444547 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 18:51:36.186363  444547 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 18:51:36.186374  444547 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 18:51:36.186386  444547 command_runner.go:130] > # stream_port = "0"
	I0819 18:51:36.186395  444547 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 18:51:36.186402  444547 command_runner.go:130] > # stream_enable_tls = false
	I0819 18:51:36.186409  444547 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 18:51:36.186418  444547 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 18:51:36.186429  444547 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 18:51:36.186441  444547 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 18:51:36.186450  444547 command_runner.go:130] > # minutes.
	I0819 18:51:36.186457  444547 command_runner.go:130] > # stream_tls_cert = ""
	I0819 18:51:36.186468  444547 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 18:51:36.186486  444547 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 18:51:36.186498  444547 command_runner.go:130] > # stream_tls_key = ""
	I0819 18:51:36.186511  444547 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 18:51:36.186523  444547 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 18:51:36.186547  444547 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 18:51:36.186556  444547 command_runner.go:130] > # stream_tls_ca = ""
	I0819 18:51:36.186567  444547 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 18:51:36.186578  444547 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 18:51:36.186589  444547 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 18:51:36.186600  444547 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0819 18:51:36.186610  444547 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 18:51:36.186622  444547 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 18:51:36.186629  444547 command_runner.go:130] > [crio.runtime]
	I0819 18:51:36.186639  444547 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 18:51:36.186650  444547 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 18:51:36.186659  444547 command_runner.go:130] > # "nofile=1024:2048"
	I0819 18:51:36.186670  444547 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 18:51:36.186674  444547 command_runner.go:130] > # default_ulimits = [
	I0819 18:51:36.186678  444547 command_runner.go:130] > # ]
	I0819 18:51:36.186687  444547 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 18:51:36.186701  444547 command_runner.go:130] > # no_pivot = false
	I0819 18:51:36.186714  444547 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 18:51:36.186727  444547 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 18:51:36.186738  444547 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 18:51:36.186747  444547 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 18:51:36.186758  444547 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 18:51:36.186773  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 18:51:36.186783  444547 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 18:51:36.186791  444547 command_runner.go:130] > # Cgroup setting for conmon
	I0819 18:51:36.186805  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 18:51:36.186814  444547 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 18:51:36.186824  444547 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 18:51:36.186834  444547 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 18:51:36.186845  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 18:51:36.186855  444547 command_runner.go:130] > conmon_env = [
	I0819 18:51:36.186864  444547 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 18:51:36.186872  444547 command_runner.go:130] > ]
	I0819 18:51:36.186881  444547 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 18:51:36.186891  444547 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 18:51:36.186902  444547 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 18:51:36.186911  444547 command_runner.go:130] > # default_env = [
	I0819 18:51:36.186916  444547 command_runner.go:130] > # ]
	I0819 18:51:36.186957  444547 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 18:51:36.186977  444547 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0819 18:51:36.186983  444547 command_runner.go:130] > # selinux = false
	I0819 18:51:36.186992  444547 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 18:51:36.187004  444547 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 18:51:36.187019  444547 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 18:51:36.187029  444547 command_runner.go:130] > # seccomp_profile = ""
	I0819 18:51:36.187038  444547 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 18:51:36.187049  444547 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 18:51:36.187059  444547 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 18:51:36.187069  444547 command_runner.go:130] > # which might increase security.
	I0819 18:51:36.187074  444547 command_runner.go:130] > # This option is currently deprecated,
	I0819 18:51:36.187084  444547 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 18:51:36.187095  444547 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 18:51:36.187107  444547 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 18:51:36.187127  444547 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 18:51:36.187139  444547 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 18:51:36.187152  444547 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0819 18:51:36.187160  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.187167  444547 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 18:51:36.187178  444547 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 18:51:36.187188  444547 command_runner.go:130] > # the cgroup blockio controller.
	I0819 18:51:36.187200  444547 command_runner.go:130] > # blockio_config_file = ""
	I0819 18:51:36.187214  444547 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 18:51:36.187224  444547 command_runner.go:130] > # blockio parameters.
	I0819 18:51:36.187231  444547 command_runner.go:130] > # blockio_reload = false
	I0819 18:51:36.187241  444547 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 18:51:36.187250  444547 command_runner.go:130] > # irqbalance daemon.
	I0819 18:51:36.187259  444547 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 18:51:36.187271  444547 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0819 18:51:36.187285  444547 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 18:51:36.187297  444547 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 18:51:36.187309  444547 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 18:51:36.187322  444547 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 18:51:36.187332  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.187344  444547 command_runner.go:130] > # rdt_config_file = ""
	I0819 18:51:36.187353  444547 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 18:51:36.187363  444547 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 18:51:36.187390  444547 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 18:51:36.187400  444547 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 18:51:36.187410  444547 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 18:51:36.187425  444547 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 18:51:36.187435  444547 command_runner.go:130] > # will be added.
	I0819 18:51:36.187442  444547 command_runner.go:130] > # default_capabilities = [
	I0819 18:51:36.187451  444547 command_runner.go:130] > # 	"CHOWN",
	I0819 18:51:36.187458  444547 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 18:51:36.187466  444547 command_runner.go:130] > # 	"FSETID",
	I0819 18:51:36.187476  444547 command_runner.go:130] > # 	"FOWNER",
	I0819 18:51:36.187484  444547 command_runner.go:130] > # 	"SETGID",
	I0819 18:51:36.187490  444547 command_runner.go:130] > # 	"SETUID",
	I0819 18:51:36.187499  444547 command_runner.go:130] > # 	"SETPCAP",
	I0819 18:51:36.187506  444547 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 18:51:36.187516  444547 command_runner.go:130] > # 	"KILL",
	I0819 18:51:36.187521  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187536  444547 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 18:51:36.187549  444547 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 18:51:36.187564  444547 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 18:51:36.187577  444547 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 18:51:36.187588  444547 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 18:51:36.187595  444547 command_runner.go:130] > default_sysctls = [
	I0819 18:51:36.187599  444547 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 18:51:36.187602  444547 command_runner.go:130] > ]
	I0819 18:51:36.187607  444547 command_runner.go:130] > # List of devices on the host that a
	I0819 18:51:36.187613  444547 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 18:51:36.187617  444547 command_runner.go:130] > # allowed_devices = [
	I0819 18:51:36.187621  444547 command_runner.go:130] > # 	"/dev/fuse",
	I0819 18:51:36.187626  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187637  444547 command_runner.go:130] > # List of additional devices, specified as
	I0819 18:51:36.187650  444547 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 18:51:36.187663  444547 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 18:51:36.187675  444547 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 18:51:36.187685  444547 command_runner.go:130] > # additional_devices = [
	I0819 18:51:36.187690  444547 command_runner.go:130] > # ]
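To make the <device-on-host>:<device-on-container>:<permissions> syntax above concrete, a minimal sketch of a drop-in (device and file name are illustrative and not part of the minikube-generated config):

	sudo tee /etc/crio/crio.conf.d/20-devices.conf <<-'EOF'
	[crio.runtime]
	additional_devices = [
	        "/dev/fuse:/dev/fuse:rwm",
	]
	EOF
	sudo systemctl restart crio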
	I0819 18:51:36.187699  444547 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 18:51:36.187703  444547 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 18:51:36.187707  444547 command_runner.go:130] > # 	"/etc/cdi",
	I0819 18:51:36.187711  444547 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 18:51:36.187715  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187721  444547 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 18:51:36.187729  444547 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 18:51:36.187735  444547 command_runner.go:130] > # Defaults to false.
	I0819 18:51:36.187739  444547 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 18:51:36.187746  444547 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 18:51:36.187753  444547 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 18:51:36.187756  444547 command_runner.go:130] > # hooks_dir = [
	I0819 18:51:36.187761  444547 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 18:51:36.187766  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187775  444547 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 18:51:36.187788  444547 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 18:51:36.187800  444547 command_runner.go:130] > # its default mounts from the following two files:
	I0819 18:51:36.187808  444547 command_runner.go:130] > #
	I0819 18:51:36.187819  444547 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 18:51:36.187831  444547 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 18:51:36.187841  444547 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 18:51:36.187846  444547 command_runner.go:130] > #
	I0819 18:51:36.187856  444547 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 18:51:36.187870  444547 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 18:51:36.187887  444547 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 18:51:36.187899  444547 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 18:51:36.187907  444547 command_runner.go:130] > #
	I0819 18:51:36.187915  444547 command_runner.go:130] > # default_mounts_file = ""
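For reference, the /SRC:/DST format described above is one mount per line. A hedged example of the override file (the zoneinfo path is just an illustration):

	sudo tee /etc/containers/mounts.conf <<-'EOF'
	/usr/share/zoneinfo:/usr/share/zoneinfo
	EOF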
	I0819 18:51:36.187927  444547 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 18:51:36.187940  444547 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 18:51:36.187948  444547 command_runner.go:130] > pids_limit = 1024
	I0819 18:51:36.187961  444547 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0819 18:51:36.187976  444547 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 18:51:36.187989  444547 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 18:51:36.188004  444547 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 18:51:36.188020  444547 command_runner.go:130] > # log_size_max = -1
	I0819 18:51:36.188034  444547 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 18:51:36.188043  444547 command_runner.go:130] > # log_to_journald = false
	I0819 18:51:36.188053  444547 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 18:51:36.188064  444547 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 18:51:36.188076  444547 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 18:51:36.188084  444547 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 18:51:36.188095  444547 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 18:51:36.188103  444547 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 18:51:36.188113  444547 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 18:51:36.188123  444547 command_runner.go:130] > # read_only = false
	I0819 18:51:36.188133  444547 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 18:51:36.188144  444547 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 18:51:36.188151  444547 command_runner.go:130] > # live configuration reload.
	I0819 18:51:36.188161  444547 command_runner.go:130] > # log_level = "info"
	I0819 18:51:36.188171  444547 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 18:51:36.188182  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.188190  444547 command_runner.go:130] > # log_filter = ""
	I0819 18:51:36.188199  444547 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 18:51:36.188216  444547 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 18:51:36.188225  444547 command_runner.go:130] > # separated by comma.
	I0819 18:51:36.188237  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188247  444547 command_runner.go:130] > # uid_mappings = ""
	I0819 18:51:36.188257  444547 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 18:51:36.188269  444547 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 18:51:36.188278  444547 command_runner.go:130] > # separated by comma.
	I0819 18:51:36.188293  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188303  444547 command_runner.go:130] > # gid_mappings = ""
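A sketch of the containerUID:HostUID:Size / containerGID:HostGID:Size range syntax documented above, with arbitrary values (note both options are flagged as deprecated in favour of KEP-127):

	# Map container root (ID 0) to host ID 100000 for 65536 IDs.
	sudo tee /etc/crio/crio.conf.d/30-userns.conf <<-'EOF'
	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	EOF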
	I0819 18:51:36.188313  444547 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 18:51:36.188325  444547 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 18:51:36.188337  444547 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 18:51:36.188351  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188359  444547 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 18:51:36.188366  444547 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 18:51:36.188375  444547 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 18:51:36.188381  444547 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 18:51:36.188390  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188394  444547 command_runner.go:130] > # minimum_mappable_gid = -1
	I0819 18:51:36.188402  444547 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 18:51:36.188408  444547 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 18:51:36.188415  444547 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 18:51:36.188419  444547 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 18:51:36.188424  444547 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 18:51:36.188430  444547 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 18:51:36.188437  444547 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 18:51:36.188441  444547 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 18:51:36.188445  444547 command_runner.go:130] > drop_infra_ctr = false
	I0819 18:51:36.188451  444547 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 18:51:36.188458  444547 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 18:51:36.188465  444547 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 18:51:36.188471  444547 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 18:51:36.188482  444547 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 18:51:36.188489  444547 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 18:51:36.188495  444547 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 18:51:36.188502  444547 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 18:51:36.188506  444547 command_runner.go:130] > # shared_cpuset = ""
	I0819 18:51:36.188514  444547 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 18:51:36.188519  444547 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 18:51:36.188524  444547 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 18:51:36.188531  444547 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 18:51:36.188537  444547 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 18:51:36.188549  444547 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 18:51:36.188561  444547 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 18:51:36.188571  444547 command_runner.go:130] > # enable_criu_support = false
	I0819 18:51:36.188579  444547 command_runner.go:130] > # Enable/disable the generation of the container,
	I0819 18:51:36.188591  444547 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 18:51:36.188598  444547 command_runner.go:130] > # enable_pod_events = false
	I0819 18:51:36.188604  444547 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 18:51:36.188620  444547 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 18:51:36.188626  444547 command_runner.go:130] > # default_runtime = "runc"
	I0819 18:51:36.188631  444547 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 18:51:36.188638  444547 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0819 18:51:36.188649  444547 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 18:51:36.188656  444547 command_runner.go:130] > # creation as a file is not desired either.
	I0819 18:51:36.188664  444547 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 18:51:36.188671  444547 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 18:51:36.188675  444547 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 18:51:36.188681  444547 command_runner.go:130] > # ]
	I0819 18:51:36.188686  444547 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 18:51:36.188694  444547 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 18:51:36.188700  444547 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 18:51:36.188708  444547 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 18:51:36.188711  444547 command_runner.go:130] > #
	I0819 18:51:36.188716  444547 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 18:51:36.188720  444547 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 18:51:36.188744  444547 command_runner.go:130] > # runtime_type = "oci"
	I0819 18:51:36.188752  444547 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 18:51:36.188757  444547 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 18:51:36.188763  444547 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 18:51:36.188768  444547 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 18:51:36.188774  444547 command_runner.go:130] > # monitor_env = []
	I0819 18:51:36.188778  444547 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 18:51:36.188782  444547 command_runner.go:130] > # allowed_annotations = []
	I0819 18:51:36.188790  444547 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 18:51:36.188795  444547 command_runner.go:130] > # Where:
	I0819 18:51:36.188800  444547 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 18:51:36.188806  444547 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 18:51:36.188813  444547 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 18:51:36.188822  444547 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 18:51:36.188828  444547 command_runner.go:130] > #   in $PATH.
	I0819 18:51:36.188834  444547 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 18:51:36.188839  444547 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 18:51:36.188845  444547 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 18:51:36.188851  444547 command_runner.go:130] > #   state.
	I0819 18:51:36.188858  444547 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 18:51:36.188865  444547 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0819 18:51:36.188871  444547 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 18:51:36.188879  444547 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 18:51:36.188885  444547 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 18:51:36.188893  444547 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 18:51:36.188898  444547 command_runner.go:130] > #   The currently recognized values are:
	I0819 18:51:36.188904  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 18:51:36.188911  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 18:51:36.188917  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 18:51:36.188925  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 18:51:36.188934  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 18:51:36.188940  444547 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 18:51:36.188948  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 18:51:36.188954  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 18:51:36.188962  444547 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 18:51:36.188968  444547 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 18:51:36.188972  444547 command_runner.go:130] > #   deprecated option "conmon".
	I0819 18:51:36.188979  444547 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 18:51:36.188985  444547 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 18:51:36.188992  444547 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 18:51:36.188998  444547 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 18:51:36.189006  444547 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0819 18:51:36.189013  444547 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 18:51:36.189019  444547 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 18:51:36.189026  444547 command_runner.go:130] > #   runtime executable paths for the runtime handler.
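Putting the fields above together, a hedged sketch of one extra runtime-handler entry (the crun handler, its binary path and the drop-in name are assumptions; only the runc handler shown further below actually exists in this config):

	sudo tee /etc/crio/crio.conf.d/40-crun.conf <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio

Pods would then select it through a Kubernetes RuntimeClass whose handler field is "crun".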
	I0819 18:51:36.189031  444547 command_runner.go:130] > #
	I0819 18:51:36.189041  444547 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 18:51:36.189044  444547 command_runner.go:130] > #
	I0819 18:51:36.189051  444547 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 18:51:36.189058  444547 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 18:51:36.189062  444547 command_runner.go:130] > #
	I0819 18:51:36.189070  444547 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 18:51:36.189078  444547 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 18:51:36.189082  444547 command_runner.go:130] > #
	I0819 18:51:36.189089  444547 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 18:51:36.189095  444547 command_runner.go:130] > # feature.
	I0819 18:51:36.189100  444547 command_runner.go:130] > #
	I0819 18:51:36.189106  444547 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0819 18:51:36.189114  444547 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 18:51:36.189120  444547 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 18:51:36.189127  444547 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 18:51:36.189146  444547 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 18:51:36.189154  444547 command_runner.go:130] > #
	I0819 18:51:36.189163  444547 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 18:51:36.189174  444547 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 18:51:36.189178  444547 command_runner.go:130] > #
	I0819 18:51:36.189184  444547 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 18:51:36.189192  444547 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 18:51:36.189195  444547 command_runner.go:130] > #
	I0819 18:51:36.189203  444547 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 18:51:36.189209  444547 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 18:51:36.189214  444547 command_runner.go:130] > # limitation.
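As a hedged sketch of the wiring described above: the chosen runtime handler must list "io.kubernetes.cri-o.seccompNotifierAction" in allowed_annotations, and the pod sets that annotation with restartPolicy: Never (the pod name, image and RuntimeDefault profile below are placeholders):

	cat <<-'EOF' | kubectl --context functional-124593 apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault
	EOF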
	I0819 18:51:36.189220  444547 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 18:51:36.189226  444547 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 18:51:36.189230  444547 command_runner.go:130] > runtime_type = "oci"
	I0819 18:51:36.189234  444547 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 18:51:36.189240  444547 command_runner.go:130] > runtime_config_path = ""
	I0819 18:51:36.189244  444547 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 18:51:36.189248  444547 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 18:51:36.189252  444547 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 18:51:36.189256  444547 command_runner.go:130] > monitor_env = [
	I0819 18:51:36.189261  444547 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 18:51:36.189266  444547 command_runner.go:130] > ]
	I0819 18:51:36.189270  444547 command_runner.go:130] > privileged_without_host_devices = false
	I0819 18:51:36.189278  444547 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 18:51:36.189283  444547 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 18:51:36.189291  444547 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 18:51:36.189302  444547 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0819 18:51:36.189311  444547 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 18:51:36.189317  444547 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 18:51:36.189328  444547 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 18:51:36.189339  444547 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 18:51:36.189346  444547 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 18:51:36.189353  444547 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 18:51:36.189358  444547 command_runner.go:130] > # Example:
	I0819 18:51:36.189363  444547 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 18:51:36.189370  444547 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 18:51:36.189374  444547 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 18:51:36.189382  444547 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 18:51:36.189386  444547 command_runner.go:130] > # cpuset = 0
	I0819 18:51:36.189393  444547 command_runner.go:130] > # cpushares = "0-1"
	I0819 18:51:36.189396  444547 command_runner.go:130] > # Where:
	I0819 18:51:36.189401  444547 command_runner.go:130] > # The workload name is workload-type.
	I0819 18:51:36.189409  444547 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 18:51:36.189415  444547 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 18:51:36.189422  444547 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 18:51:36.189430  444547 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 18:51:36.189437  444547 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
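A hedged, concrete reading of the workload example above (names and values are invented; the annotation keys follow the log's own example and must be present at pod creation for CRI-O to act on them):

	cat <<-'EOF' | kubectl --context functional-124593 apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""
	    io.crio.workload-type/app: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	EOF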
	I0819 18:51:36.189442  444547 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 18:51:36.189449  444547 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 18:51:36.189455  444547 command_runner.go:130] > # Default value is set to true
	I0819 18:51:36.189459  444547 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 18:51:36.189469  444547 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 18:51:36.189478  444547 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 18:51:36.189484  444547 command_runner.go:130] > # Default value is set to 'false'
	I0819 18:51:36.189489  444547 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 18:51:36.189497  444547 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 18:51:36.189500  444547 command_runner.go:130] > #
	I0819 18:51:36.189505  444547 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 18:51:36.189513  444547 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 18:51:36.189519  444547 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 18:51:36.189528  444547 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 18:51:36.189536  444547 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0819 18:51:36.189542  444547 command_runner.go:130] > [crio.image]
	I0819 18:51:36.189548  444547 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 18:51:36.189554  444547 command_runner.go:130] > # default_transport = "docker://"
	I0819 18:51:36.189560  444547 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 18:51:36.189569  444547 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 18:51:36.189574  444547 command_runner.go:130] > # global_auth_file = ""
	I0819 18:51:36.189578  444547 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 18:51:36.189583  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.189590  444547 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 18:51:36.189596  444547 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 18:51:36.189604  444547 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 18:51:36.189609  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.189615  444547 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 18:51:36.189620  444547 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 18:51:36.189626  444547 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0819 18:51:36.189632  444547 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0819 18:51:36.189639  444547 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 18:51:36.189643  444547 command_runner.go:130] > # pause_command = "/pause"
	I0819 18:51:36.189649  444547 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 18:51:36.189655  444547 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 18:51:36.189660  444547 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 18:51:36.189670  444547 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 18:51:36.189678  444547 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 18:51:36.189684  444547 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 18:51:36.189690  444547 command_runner.go:130] > # pinned_images = [
	I0819 18:51:36.189693  444547 command_runner.go:130] > # ]
	I0819 18:51:36.189700  444547 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 18:51:36.189707  444547 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 18:51:36.189713  444547 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 18:51:36.189721  444547 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 18:51:36.189726  444547 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 18:51:36.189732  444547 command_runner.go:130] > # signature_policy = ""
	I0819 18:51:36.189737  444547 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 18:51:36.189744  444547 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 18:51:36.189754  444547 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 18:51:36.189762  444547 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0819 18:51:36.189770  444547 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 18:51:36.189775  444547 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 18:51:36.189781  444547 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 18:51:36.189786  444547 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 18:51:36.189791  444547 command_runner.go:130] > # changing them here.
	I0819 18:51:36.189795  444547 command_runner.go:130] > # insecure_registries = [
	I0819 18:51:36.189798  444547 command_runner.go:130] > # ]
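Per the log's own advice, registries are better configured in /etc/containers/registries.conf than via insecure_registries here. A minimal sketch in the v2 registries.conf format (the registry address is a placeholder):

	sudo tee -a /etc/containers/registries.conf <<-'EOF'
	[[registry]]
	location = "registry.internal:5000"
	insecure = true
	EOF
	sudo systemctl restart crio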
	I0819 18:51:36.189804  444547 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 18:51:36.189808  444547 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0819 18:51:36.189812  444547 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 18:51:36.189816  444547 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 18:51:36.189820  444547 command_runner.go:130] > # big_files_temporary_dir = ""
	I0819 18:51:36.189826  444547 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 18:51:36.189829  444547 command_runner.go:130] > # CNI plugins.
	I0819 18:51:36.189832  444547 command_runner.go:130] > [crio.network]
	I0819 18:51:36.189838  444547 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 18:51:36.189842  444547 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0819 18:51:36.189847  444547 command_runner.go:130] > # cni_default_network = ""
	I0819 18:51:36.189851  444547 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 18:51:36.189855  444547 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 18:51:36.189860  444547 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 18:51:36.189863  444547 command_runner.go:130] > # plugin_dirs = [
	I0819 18:51:36.189867  444547 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 18:51:36.189870  444547 command_runner.go:130] > # ]
	I0819 18:51:36.189875  444547 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0819 18:51:36.189879  444547 command_runner.go:130] > [crio.metrics]
	I0819 18:51:36.189883  444547 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 18:51:36.189887  444547 command_runner.go:130] > enable_metrics = true
	I0819 18:51:36.189891  444547 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 18:51:36.189895  444547 command_runner.go:130] > # Per default all metrics are enabled.
	I0819 18:51:36.189900  444547 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0819 18:51:36.189906  444547 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 18:51:36.189911  444547 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 18:51:36.189915  444547 command_runner.go:130] > # metrics_collectors = [
	I0819 18:51:36.189918  444547 command_runner.go:130] > # 	"operations",
	I0819 18:51:36.189923  444547 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 18:51:36.189927  444547 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 18:51:36.189931  444547 command_runner.go:130] > # 	"operations_errors",
	I0819 18:51:36.189935  444547 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 18:51:36.189938  444547 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 18:51:36.189946  444547 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 18:51:36.189950  444547 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 18:51:36.189954  444547 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 18:51:36.189958  444547 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 18:51:36.189962  444547 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 18:51:36.189970  444547 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 18:51:36.189973  444547 command_runner.go:130] > # 	"containers_oom_total",
	I0819 18:51:36.189977  444547 command_runner.go:130] > # 	"containers_oom",
	I0819 18:51:36.189980  444547 command_runner.go:130] > # 	"processes_defunct",
	I0819 18:51:36.189984  444547 command_runner.go:130] > # 	"operations_total",
	I0819 18:51:36.189988  444547 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 18:51:36.189993  444547 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 18:51:36.189997  444547 command_runner.go:130] > # 	"operations_errors_total",
	I0819 18:51:36.190001  444547 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 18:51:36.190005  444547 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 18:51:36.190009  444547 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 18:51:36.190013  444547 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 18:51:36.190017  444547 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 18:51:36.190021  444547 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 18:51:36.190026  444547 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 18:51:36.190033  444547 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 18:51:36.190035  444547 command_runner.go:130] > # ]
	I0819 18:51:36.190040  444547 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 18:51:36.190046  444547 command_runner.go:130] > # metrics_port = 9090
	I0819 18:51:36.190051  444547 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 18:51:36.190055  444547 command_runner.go:130] > # metrics_socket = ""
	I0819 18:51:36.190061  444547 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 18:51:36.190069  444547 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 18:51:36.190075  444547 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 18:51:36.190082  444547 command_runner.go:130] > # certificate on any modification event.
	I0819 18:51:36.190085  444547 command_runner.go:130] > # metrics_cert = ""
	I0819 18:51:36.190090  444547 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 18:51:36.190097  444547 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 18:51:36.190101  444547 command_runner.go:130] > # metrics_key = ""
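With enable_metrics = true and the default port of 9090 above, the exporter can be spot-checked from the host using the same ssh pattern the test already relies on (the /metrics path and the crio_ prefix are assumptions based on the collector names listed above):

	out/minikube-linux-amd64 -p functional-124593 ssh "curl -s http://127.0.0.1:9090/metrics | grep -m 5 '^crio_'"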
	I0819 18:51:36.190106  444547 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 18:51:36.190110  444547 command_runner.go:130] > [crio.tracing]
	I0819 18:51:36.190117  444547 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 18:51:36.190124  444547 command_runner.go:130] > # enable_tracing = false
	I0819 18:51:36.190129  444547 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0819 18:51:36.190135  444547 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 18:51:36.190142  444547 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 18:51:36.190147  444547 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0819 18:51:36.190151  444547 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 18:51:36.190154  444547 command_runner.go:130] > [crio.nri]
	I0819 18:51:36.190158  444547 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 18:51:36.190167  444547 command_runner.go:130] > # enable_nri = false
	I0819 18:51:36.190172  444547 command_runner.go:130] > # NRI socket to listen on.
	I0819 18:51:36.190177  444547 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 18:51:36.190183  444547 command_runner.go:130] > # NRI plugin directory to use.
	I0819 18:51:36.190188  444547 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 18:51:36.190194  444547 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 18:51:36.190198  444547 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 18:51:36.190205  444547 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 18:51:36.190209  444547 command_runner.go:130] > # nri_disable_connections = false
	I0819 18:51:36.190217  444547 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 18:51:36.190221  444547 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 18:51:36.190228  444547 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 18:51:36.190233  444547 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0819 18:51:36.190238  444547 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 18:51:36.190243  444547 command_runner.go:130] > [crio.stats]
	I0819 18:51:36.190249  444547 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 18:51:36.190255  444547 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 18:51:36.190259  444547 command_runner.go:130] > # stats_collection_period = 0
	I0819 18:51:36.190450  444547 command_runner.go:130] ! time="2024-08-19 18:51:36.161529726Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 18:51:36.190501  444547 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0819 18:51:36.190630  444547 cni.go:84] Creating CNI manager for ""
	I0819 18:51:36.190641  444547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:51:36.190651  444547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:51:36.190674  444547 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8441 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-124593 NodeName:functional-124593 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:51:36.190815  444547 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-124593"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
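One way to sanity-check a generated config like the one above, without touching cluster state, is kubeadm's dry-run mode against the file minikube transfers a few lines below (this is only an illustrative check, not something the test performs):

	out/minikube-linux-amd64 -p functional-124593 ssh "sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run"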
	
	I0819 18:51:36.190886  444547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:51:36.200955  444547 command_runner.go:130] > kubeadm
	I0819 18:51:36.200981  444547 command_runner.go:130] > kubectl
	I0819 18:51:36.200986  444547 command_runner.go:130] > kubelet
	I0819 18:51:36.201016  444547 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:51:36.201072  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:51:36.211041  444547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0819 18:51:36.228264  444547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:51:36.245722  444547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0819 18:51:36.263018  444547 ssh_runner.go:195] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I0819 18:51:36.267130  444547 command_runner.go:130] > 192.168.39.22	control-plane.minikube.internal
	I0819 18:51:36.267229  444547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:51:36.398107  444547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:51:36.412895  444547 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593 for IP: 192.168.39.22
	I0819 18:51:36.412924  444547 certs.go:194] generating shared ca certs ...
	I0819 18:51:36.412943  444547 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:51:36.413154  444547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 18:51:36.413203  444547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 18:51:36.413217  444547 certs.go:256] generating profile certs ...
	I0819 18:51:36.413317  444547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.key
	I0819 18:51:36.413414  444547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key.aa5a99d1
	I0819 18:51:36.413463  444547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key
	I0819 18:51:36.413478  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:51:36.413496  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:51:36.413514  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:51:36.413543  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:51:36.413558  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:51:36.413577  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:51:36.413596  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:51:36.413612  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:51:36.413684  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 18:51:36.413728  444547 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 18:51:36.413741  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 18:51:36.413782  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:51:36.413816  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:51:36.413853  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 18:51:36.413906  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 18:51:36.413944  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.413964  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.413981  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.414774  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:51:36.439176  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:51:36.463796  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:51:36.490998  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 18:51:36.514746  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 18:51:36.538661  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:51:36.562630  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:51:36.586739  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 18:51:36.610889  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:51:36.634562  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 18:51:36.658286  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 18:51:36.681715  444547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:51:36.698451  444547 ssh_runner.go:195] Run: openssl version
	I0819 18:51:36.704220  444547 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 18:51:36.704339  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 18:51:36.715389  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720025  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720080  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720142  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.725901  444547 command_runner.go:130] > 51391683
	I0819 18:51:36.726015  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 18:51:36.736206  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 18:51:36.747737  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752558  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752599  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752642  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.758223  444547 command_runner.go:130] > 3ec20f2e
	I0819 18:51:36.758300  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:51:36.767946  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:51:36.779143  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783850  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783902  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783950  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.789800  444547 command_runner.go:130] > b5213941
	I0819 18:51:36.789894  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
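	The lines above show the CA-certificate wiring step: each PEM under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and then symlinked into /etc/ssl/certs as `<subject-hash>.0` (for example b5213941.0 for minikubeCA.pem). A rough Go sketch of that convention is below; it is not minikube's code, it only shells out to openssl the same way and prints the link it would create (the cert path is taken from the log and would need adjusting on another machine).

```go
// Sketch: reproduce the /etc/ssl/certs/<subject-hash>.0 convention seen above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log

	// Same command the log runs: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl failed:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above

	// The symlink the log then creates: /etc/ssl/certs/<hash>.0 -> <cert>
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	fmt.Printf("ln -fs %s %s\n", certPath, link) // printed instead of linked; linking needs root
}
```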
	I0819 18:51:36.799700  444547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:51:36.804144  444547 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:51:36.804180  444547 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 18:51:36.804188  444547 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0819 18:51:36.804194  444547 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 18:51:36.804201  444547 command_runner.go:130] > Access: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804206  444547 command_runner.go:130] > Modify: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804217  444547 command_runner.go:130] > Change: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804222  444547 command_runner.go:130] >  Birth: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804284  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:51:36.810230  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.810339  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:51:36.816159  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.816241  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:51:36.821909  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.822019  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:51:36.827758  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.827847  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:51:36.833329  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.833420  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 18:51:36.838995  444547 command_runner.go:130] > Certificate will not expire
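	Each "Certificate will not expire" line above is the output of `openssl x509 -noout -in <crt> -checkend 86400`, i.e. "does this certificate expire within the next 24 hours?". A minimal stdlib-only Go equivalent is sketched below (the cert path is the one from the log; the check itself has to run on the node where the file lives, or over ssh).

```go
// Sketch of what `openssl x509 -noout -in <crt> -checkend 86400` answers.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if NotAfter falls inside the next d (86400s in the log).
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire") // matches the log output above
	}
}
```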
	I0819 18:51:36.839152  444547 kubeadm.go:392] StartCluster: {Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:51:36.839251  444547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:51:36.839310  444547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:51:36.874453  444547 command_runner.go:130] > e908c3e4c32bde71b6441e3ee7b44b5559f7c82f0a6a46e750222d6420ee5768
	I0819 18:51:36.874803  444547 command_runner.go:130] > 790104913a298ce37e7cba7691f8e4d617c4a8638cb821e244b70555efd66fbf
	I0819 18:51:36.874823  444547 command_runner.go:130] > aa84abbb4c9f6505401955b079afdb2f11c046f6879ee08a957007aaca875b03
	I0819 18:51:36.874834  444547 command_runner.go:130] > d8bfd36fbe96397e082982d91cabf0e8c731c94d88ed6c685f08a18b2da4cc6c
	I0819 18:51:36.874843  444547 command_runner.go:130] > e4040e2a8622453f1ef2fd04190fd96bea2fd3de919c6b06deaa38e83d38cf5b
	I0819 18:51:36.874899  444547 command_runner.go:130] > 8df7ac76fe134c848ddad74ca75f51e5c19afffdbd0712cb3d74333cdc6b91dc
	I0819 18:51:36.875009  444547 command_runner.go:130] > 94e0fe7cb19015bd08c7406824ba63263d67eec22f1240ef0f0eaff258976b4f
	I0819 18:51:36.875035  444547 command_runner.go:130] > 871ee5a26fc4d8fe6f04e66a22c78ba6e6b80077bd1066ab3a3c1acb639de113
	I0819 18:51:36.875045  444547 command_runner.go:130] > 70ce15fbc3bc32cd55767aab9cde75fc9d6f452d9c22c53034e5c2af14442b32
	I0819 18:51:36.875236  444547 command_runner.go:130] > 7703464bfd87d33565890b044096dcd7f50a8b1340ec99f2a88431946c69fc1b
	I0819 18:51:36.875268  444547 command_runner.go:130] > d65fc62d1475e4da38a8c95d6b13d520659291d23ac972a2d197d382a7fa6027
	I0819 18:51:36.875360  444547 command_runner.go:130] > d89c8be1ccfda0fbaa81ee519dc2e56af6349b6060b1afd9b3ac512310edc348
	I0819 18:51:36.875408  444547 command_runner.go:130] > e38111b143b726d6b06442db0379d3ecea609946581cb76904945ed68a359c74
	I0819 18:51:36.876958  444547 cri.go:89] found id: "e908c3e4c32bde71b6441e3ee7b44b5559f7c82f0a6a46e750222d6420ee5768"
	I0819 18:51:36.876978  444547 cri.go:89] found id: "790104913a298ce37e7cba7691f8e4d617c4a8638cb821e244b70555efd66fbf"
	I0819 18:51:36.876984  444547 cri.go:89] found id: "aa84abbb4c9f6505401955b079afdb2f11c046f6879ee08a957007aaca875b03"
	I0819 18:51:36.876989  444547 cri.go:89] found id: "d8bfd36fbe96397e082982d91cabf0e8c731c94d88ed6c685f08a18b2da4cc6c"
	I0819 18:51:36.876993  444547 cri.go:89] found id: "e4040e2a8622453f1ef2fd04190fd96bea2fd3de919c6b06deaa38e83d38cf5b"
	I0819 18:51:36.876998  444547 cri.go:89] found id: "8df7ac76fe134c848ddad74ca75f51e5c19afffdbd0712cb3d74333cdc6b91dc"
	I0819 18:51:36.877002  444547 cri.go:89] found id: "94e0fe7cb19015bd08c7406824ba63263d67eec22f1240ef0f0eaff258976b4f"
	I0819 18:51:36.877006  444547 cri.go:89] found id: "871ee5a26fc4d8fe6f04e66a22c78ba6e6b80077bd1066ab3a3c1acb639de113"
	I0819 18:51:36.877010  444547 cri.go:89] found id: "70ce15fbc3bc32cd55767aab9cde75fc9d6f452d9c22c53034e5c2af14442b32"
	I0819 18:51:36.877024  444547 cri.go:89] found id: "7703464bfd87d33565890b044096dcd7f50a8b1340ec99f2a88431946c69fc1b"
	I0819 18:51:36.877032  444547 cri.go:89] found id: "d65fc62d1475e4da38a8c95d6b13d520659291d23ac972a2d197d382a7fa6027"
	I0819 18:51:36.877036  444547 cri.go:89] found id: "d89c8be1ccfda0fbaa81ee519dc2e56af6349b6060b1afd9b3ac512310edc348"
	I0819 18:51:36.877040  444547 cri.go:89] found id: "e38111b143b726d6b06442db0379d3ecea609946581cb76904945ed68a359c74"
	I0819 18:51:36.877044  444547 cri.go:89] found id: ""
	I0819 18:51:36.877087  444547 ssh_runner.go:195] Run: sudo runc list -f json
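	The "found id:" lines come from running `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` and collecting the returned container IDs. A small sketch of the same step is below (assumptions: crictl is on PATH and the program runs as root on the node); it is a stand-in for, not a copy of, the cri.go logic.

```go
// Sketch: list kube-system container IDs the way the logged command does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line) // one 64-char container ID per line
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
```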
	
	
	==> CRI-O <==
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.560582619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094231560556649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79e00d7a-5cfe-4eb3-a265-faf1ada01f6c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.561075722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c02c96a0-7b8b-442e-947c-8a0b256b55a2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.561126009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c02c96a0-7b8b-442e-947c-8a0b256b55a2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.561212094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c02c96a0-7b8b-442e-947c-8a0b256b55a2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.592453903Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=040ac7c5-6105-484d-9126-9575e670ec3f name=/runtime.v1.RuntimeService/Version
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.592625792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=040ac7c5-6105-484d-9126-9575e670ec3f name=/runtime.v1.RuntimeService/Version
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.593940362Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6199e90-18bf-44bb-9aca-8351145cec86 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.594296272Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094231594276590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6199e90-18bf-44bb-9aca-8351145cec86 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.594911265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7095a632-f4b5-4ab3-ad61-e6fbcf9160e5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.595002291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7095a632-f4b5-4ab3-ad61-e6fbcf9160e5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.595089974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7095a632-f4b5-4ab3-ad61-e6fbcf9160e5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.634823022Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51cefc03-a36c-4a85-9ceb-abe6ff6eaf1a name=/runtime.v1.RuntimeService/Version
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.634923587Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51cefc03-a36c-4a85-9ceb-abe6ff6eaf1a name=/runtime.v1.RuntimeService/Version
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.636386735Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=443f5af9-3680-4102-a5b1-f18d1775061e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.636898502Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094231636874076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=443f5af9-3680-4102-a5b1-f18d1775061e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.637336517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea01f498-110a-4b79-803c-120986244bfe name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.637414571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea01f498-110a-4b79-803c-120986244bfe name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.637540123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea01f498-110a-4b79-803c-120986244bfe name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.668135052Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=844398c3-cbc3-4593-bb7c-4d47a742fdc4 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.668226344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=844398c3-cbc3-4593-bb7c-4d47a742fdc4 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.669437116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d329c961-96ac-482f-811f-6336f3e98de5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.669981371Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094231669955949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d329c961-96ac-482f-811f-6336f3e98de5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.670635407Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a26ca7be-ea27-41e7-83e8-aa5b801beffb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.670702450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a26ca7be-ea27-41e7-83e8-aa5b801beffb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:03:51 functional-124593 crio[3397]: time="2024-08-19 19:03:51.670793294Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a26ca7be-ea27-41e7-83e8-aa5b801beffb name=/runtime.v1.RuntimeService/ListContainers
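	The CRI-O debug lines above are the server side of ListContainers/ImageFsInfo RPCs arriving over the CRI socket. For reference, a client-side sketch of the same ListContainers call is below. Assumptions: the socket is at /var/run/crio/crio.sock, the program runs as root, and go.mod pulls in google.golang.org/grpc and k8s.io/cri-api; this is an illustration, not the tooling used by the test.

```go
// Sketch: call the CRI ListContainers RPC that CRI-O is answering above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter is what produces "No filters were applied, returning
	// full container list" in the CRI-O debug output above.
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s %s\n", c.GetId(), c.GetMetadata().GetName(), c.GetState())
	}
}
```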
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e764198234f75       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   About a minute ago   Exited              kube-controller-manager   15                  1b98c8cb37fd8       kube-controller-manager-functional-124593
	effebbec1cbf2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   About a minute ago   Exited              kube-apiserver            15                  59013506b9174       kube-apiserver-functional-124593
	e3ddc8f73f9e6       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   4 minutes ago        Running             kube-scheduler            4                   ddca0e39cb48d       kube-scheduler-functional-124593
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.066009] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.197712] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.124470] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.281624] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +4.011758] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +4.140421] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.057144] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989777] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[  +0.082461] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.724179] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.114451] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.497778] kauditd_printk_skb: 98 callbacks suppressed
	[Aug19 18:50] systemd-fstab-generator[3042]: Ignoring "noauto" option for root device
	[  +0.214760] systemd-fstab-generator[3119]: Ignoring "noauto" option for root device
	[  +0.240969] systemd-fstab-generator[3138]: Ignoring "noauto" option for root device
	[  +0.217290] systemd-fstab-generator[3183]: Ignoring "noauto" option for root device
	[  +0.371422] systemd-fstab-generator[3242]: Ignoring "noauto" option for root device
	[Aug19 18:51] systemd-fstab-generator[3507]: Ignoring "noauto" option for root device
	[  +0.085616] kauditd_printk_skb: 184 callbacks suppressed
	[  +1.984129] systemd-fstab-generator[3627]: Ignoring "noauto" option for root device
	[Aug19 18:52] kauditd_printk_skb: 81 callbacks suppressed
	[Aug19 18:55] systemd-fstab-generator[9158]: Ignoring "noauto" option for root device
	[Aug19 18:56] kauditd_printk_skb: 70 callbacks suppressed
	[Aug19 18:59] systemd-fstab-generator[10102]: Ignoring "noauto" option for root device
	[Aug19 19:00] kauditd_printk_skb: 54 callbacks suppressed
	
	
	==> kernel <==
	 19:03:51 up 14 min,  0 users,  load average: 0.10, 0.15, 0.10
	Linux functional-124593 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d] <==
	I0819 19:02:43.467753       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0819 19:02:43.743186       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:43.743280       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0819 19:02:43.751731       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 19:02:43.755112       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 19:02:43.758721       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 19:02:43.758852       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 19:02:43.759065       1 instance.go:232] Using reconciler: lease
	W0819 19:02:43.760135       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:44.743832       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:44.743963       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:44.761569       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:46.185098       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:46.207828       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:46.382784       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:48.822814       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:48.974921       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:49.351895       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:53.115051       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:53.237838       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:54.161281       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:59.099479       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:59.492816       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:03:01.794931       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0819 19:03:03.759664       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
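	Every failure above is the apiserver being unable to reach etcd on 127.0.0.1:2379 until its storage-factory deadline expires. A quick reachability probe is sketched below; note it only checks that something accepts TCP connections on the port, it is not an etcd health check.

```go
// Sketch: probe the etcd client endpoint the apiserver cannot reach above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
	if err != nil {
		fmt.Println("etcd endpoint unreachable:", err) // expected while etcd is down
		return
	}
	conn.Close()
	fmt.Println("something is listening on 127.0.0.1:2379")
}
```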
	
	
	==> kube-controller-manager [e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9] <==
	I0819 19:02:44.745523       1 serving.go:386] Generated self-signed cert in-memory
	I0819 19:02:44.990908       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 19:02:44.990991       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:02:44.992289       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 19:02:44.992410       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 19:02:44.992616       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 19:02:44.992692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 19:03:04.995138       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.22:8441/healthz\": dial tcp 192.168.39.22:8441: connect: connection refused"
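	The controller-manager gives up after polling https://192.168.39.22:8441/healthz and never getting through. A minimal sketch of that probe is below; TLS verification is skipped purely for illustration (the real health wait trusts the cluster CA), and the address/port are the ones from the log.

```go
// Sketch: hit the apiserver /healthz endpoint the controller-manager waits on.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.22:8441/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err) // e.g. connect: connection refused, as above
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %s %s\n", resp.Status, body)
}
```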
	
	
	==> kube-scheduler [e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca] <==
	E0819 19:03:04.765406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.22:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.39.22:38312->192.168.39.22:8441: read: connection reset by peer" logger="UnhandledError"
	W0819 19:03:13.315031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.22:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:13.315077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.22:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:21.103592       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.22:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:21.103636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.22:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:21.688385       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.22:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:21.688430       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.22:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:25.611942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.22:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:25.611985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.22:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:29.616112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.22:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:29.616172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.22:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:32.074182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.22:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:32.074280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.22:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:34.545647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.22:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:34.545700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.22:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:46.993572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.22:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:46.993633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.22:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:49.036950       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.22:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:49.037018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.22:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:49.224105       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.22:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:49.224150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.22:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:50.723059       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.22:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:50.723109       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.22:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:51.629827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.22:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:51.629910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.22:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 19 19:03:38 functional-124593 kubelet[10109]: E0819 19:03:38.320569   10109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-124593_kube-system(15de45e6effb382c12ca8494f33bff76)\"" pod="kube-system/kube-apiserver-functional-124593" podUID="15de45e6effb382c12ca8494f33bff76"
	Aug 19 19:03:38 functional-124593 kubelet[10109]: E0819 19:03:38.384299   10109 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094218384019399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:38 functional-124593 kubelet[10109]: E0819 19:03:38.384338   10109 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094218384019399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:38 functional-124593 kubelet[10109]: E0819 19:03:38.751115   10109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/default/events\": dial tcp 192.168.39.22:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-124593.17ed3659059f9af6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-124593,UID:functional-124593,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-124593 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-124593,},FirstTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,LastTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:fu
nctional-124593,}"
	Aug 19 19:03:39 functional-124593 kubelet[10109]: E0819 19:03:39.773367   10109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-124593?timeout=10s\": dial tcp 192.168.39.22:8441: connect: connection refused" interval="7s"
	Aug 19 19:03:41 functional-124593 kubelet[10109]: I0819 19:03:41.953583   10109 kubelet_node_status.go:72] "Attempting to register node" node="functional-124593"
	Aug 19 19:03:41 functional-124593 kubelet[10109]: E0819 19:03:41.954719   10109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 192.168.39.22:8441: connect: connection refused" node="functional-124593"
	Aug 19 19:03:43 functional-124593 kubelet[10109]: E0819 19:03:43.326183   10109 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-functional-124593_kube-system_1d81c5d63cba07001a82e239314e39e2_1\" is already in use by 847f5422f7736630cbb85c950b0ce7fee365173fd629c7c35c4baca6131ab56d. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="fb4264c69f603d50f969b7ac2f0dad4c593bae0da887198d6e0d16aab460b73b"
	Aug 19 19:03:43 functional-124593 kubelet[10109]: E0819 19:03:43.326390   10109 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.15-0,Command:[etcd --advertise-client-urls=https://192.168.39.22:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.39.22:2380 --initial-cluster=functional-124593=https://192.168.39.22:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.39.22:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.39.22:2380 --name=functional-124593 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.
crt --proxy-refresh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{P
robeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-functional-124593_kube-system(1d81c5d63cba07001a82e239314e39e2): CreateContainerError: the c
ontainer name \"k8s_etcd_etcd-functional-124593_kube-system_1d81c5d63cba07001a82e239314e39e2_1\" is already in use by 847f5422f7736630cbb85c950b0ce7fee365173fd629c7c35c4baca6131ab56d. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Aug 19 19:03:43 functional-124593 kubelet[10109]: E0819 19:03:43.327654   10109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-functional-124593_kube-system_1d81c5d63cba07001a82e239314e39e2_1\\\" is already in use by 847f5422f7736630cbb85c950b0ce7fee365173fd629c7c35c4baca6131ab56d. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-functional-124593" podUID="1d81c5d63cba07001a82e239314e39e2"
	Aug 19 19:03:45 functional-124593 kubelet[10109]: W0819 19:03:45.673444   10109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	Aug 19 19:03:45 functional-124593 kubelet[10109]: E0819 19:03:45.673883   10109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	Aug 19 19:03:46 functional-124593 kubelet[10109]: E0819 19:03:46.774948   10109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-124593?timeout=10s\": dial tcp 192.168.39.22:8441: connect: connection refused" interval="7s"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: I0819 19:03:48.320002   10109 scope.go:117] "RemoveContainer" containerID="e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.320617   10109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-124593_kube-system(c71ff42fdd5902541920b0f91ca1cbbc)\"" pod="kube-system/kube-controller-manager-functional-124593" podUID="c71ff42fdd5902541920b0f91ca1cbbc"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.331567   10109 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:03:48 functional-124593 kubelet[10109]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:03:48 functional-124593 kubelet[10109]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:03:48 functional-124593 kubelet[10109]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:03:48 functional-124593 kubelet[10109]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.385433   10109 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094228385106955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.385459   10109 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094228385106955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.752200   10109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/default/events\": dial tcp 192.168.39.22:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-124593.17ed3659059f9af6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-124593,UID:functional-124593,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-124593 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-124593,},FirstTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,LastTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:fu
nctional-124593,}"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: I0819 19:03:48.956588   10109 kubelet_node_status.go:72] "Attempting to register node" node="functional-124593"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.957346   10109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 192.168.39.22:8441: connect: connection refused" node="functional-124593"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:03:51.307756  447822 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-430949/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
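The "bufio.Scanner: token too long" error above is Go's bufio.Scanner hitting its default 64 KiB per-token limit while re-reading lastStart.txt, whose individual log lines exceed that size. A minimal, illustrative sketch (not minikube's actual code) of reading such a file with an enlarged scanner buffer; the local file name is assumed for the example:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Path is illustrative; the report above reads .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default max token size is 64 KiB; allow lines up to 10 MiB instead,
	// which avoids bufio.ErrTooLong ("token too long") on very long log lines.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		_ = sc.Text() // process one (possibly very long) log line
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}
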
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-124593 -n functional-124593
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-124593 -n functional-124593: exit status 2 (227.481277ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-124593" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (1.39s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 kubectl -- --context functional-124593 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-124593 kubectl -- --context functional-124593 get pods: exit status 1 (102.4268ms)

                                                
                                                
** stderr ** 
	E0819 19:04:00.228401  448303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	E0819 19:04:00.230157  448303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	E0819 19:04:00.231756  448303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	E0819 19:04:00.233373  448303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	E0819 19:04:00.234951  448303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	The connection to the server 192.168.39.22:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-linux-amd64 -p functional-124593 kubectl -- --context functional-124593 get pods": exit status 1
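The failure mode here, as in the kubelet log above, is a refused TCP dial to 192.168.39.22:8441, which typically means nothing is listening on the apiserver port at all, rather than a certificate or authorization problem. An illustrative reachability probe, not part of the test suite, that distinguishes the two cases; it assumes the standard kube-apiserver /healthz endpoint and skips certificate verification because only reachability is being checked:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skip verification: the probe only cares whether the port answers.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.22:8441/healthz")
	if err != nil {
		// A dial-level failure (e.g. "connect: connection refused") surfaces here.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	// Any HTTP status (even 401/403) means the apiserver process is up and listening.
	fmt.Println("apiserver responded:", resp.Status)
}
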
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-124593 -n functional-124593
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-124593 -n functional-124593: exit status 2 (227.925972ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 logs -n 25
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                     |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| pause   | nospam-212543 --log_dir                     | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 pause                    |                   |         |         |                     |                     |
	| unpause | nospam-212543 --log_dir                     | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 unpause                  |                   |         |         |                     |                     |
	| unpause | nospam-212543 --log_dir                     | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 unpause                  |                   |         |         |                     |                     |
	| unpause | nospam-212543 --log_dir                     | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 unpause                  |                   |         |         |                     |                     |
	| stop    | nospam-212543 --log_dir                     | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 stop                     |                   |         |         |                     |                     |
	| stop    | nospam-212543 --log_dir                     | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:49 UTC |
	|         | /tmp/nospam-212543 stop                     |                   |         |         |                     |                     |
	| stop    | nospam-212543 --log_dir                     | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC | 19 Aug 24 18:49 UTC |
	|         | /tmp/nospam-212543 stop                     |                   |         |         |                     |                     |
	| delete  | -p nospam-212543                            | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC | 19 Aug 24 18:49 UTC |
	| start   | -p functional-124593                        | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC | 19 Aug 24 18:49 UTC |
	|         | --memory=4000                               |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                       |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                    |                   |         |         |                     |                     |
	|         | --container-runtime=crio                    |                   |         |         |                     |                     |
	| start   | -p functional-124593                        | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC |                     |
	|         | --alsologtostderr -v=8                      |                   |         |         |                     |                     |
	| cache   | functional-124593 cache add                 | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | registry.k8s.io/pause:3.1                   |                   |         |         |                     |                     |
	| cache   | functional-124593 cache add                 | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | registry.k8s.io/pause:3.3                   |                   |         |         |                     |                     |
	| cache   | functional-124593 cache add                 | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| cache   | functional-124593 cache add                 | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | minikube-local-cache-test:functional-124593 |                   |         |         |                     |                     |
	| cache   | functional-124593 cache delete              | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | minikube-local-cache-test:functional-124593 |                   |         |         |                     |                     |
	| cache   | delete                                      | minikube          | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | registry.k8s.io/pause:3.3                   |                   |         |         |                     |                     |
	| cache   | list                                        | minikube          | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	| ssh     | functional-124593 ssh sudo                  | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | crictl images                               |                   |         |         |                     |                     |
	| ssh     | functional-124593                           | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | ssh sudo crictl rmi                         |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| ssh     | functional-124593 ssh                       | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC |                     |
	|         | sudo crictl inspecti                        |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| cache   | functional-124593 cache reload              | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	| ssh     | functional-124593 ssh                       | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:04 UTC |
	|         | sudo crictl inspecti                        |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| cache   | delete                                      | minikube          | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | registry.k8s.io/pause:3.1                   |                   |         |         |                     |                     |
	| cache   | delete                                      | minikube          | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| kubectl | functional-124593 kubectl --                | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --context functional-124593                 |                   |         |         |                     |                     |
	|         | get pods                                    |                   |         |         |                     |                     |
	|---------|---------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:49:56
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:49:56.790328  444547 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:49:56.790453  444547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:49:56.790459  444547 out.go:358] Setting ErrFile to fd 2...
	I0819 18:49:56.790463  444547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:49:56.790638  444547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 18:49:56.791174  444547 out.go:352] Setting JSON to false
	I0819 18:49:56.792114  444547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9148,"bootTime":1724084249,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:49:56.792181  444547 start.go:139] virtualization: kvm guest
	I0819 18:49:56.794648  444547 out.go:177] * [functional-124593] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:49:56.796256  444547 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 18:49:56.796302  444547 notify.go:220] Checking for updates...
	I0819 18:49:56.799145  444547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:49:56.800604  444547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 18:49:56.802061  444547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 18:49:56.803353  444547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:49:56.804793  444547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:49:56.806582  444547 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:49:56.806680  444547 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 18:49:56.807152  444547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:49:56.807235  444547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:49:56.823439  444547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0819 18:49:56.823898  444547 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:49:56.824445  444547 main.go:141] libmachine: Using API Version  1
	I0819 18:49:56.824484  444547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:49:56.824923  444547 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:49:56.825223  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.864107  444547 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:49:56.865533  444547 start.go:297] selected driver: kvm2
	I0819 18:49:56.865559  444547 start.go:901] validating driver "kvm2" against &{Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:49:56.865676  444547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:49:56.866051  444547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:49:56.866145  444547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:49:56.882415  444547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:49:56.883177  444547 cni.go:84] Creating CNI manager for ""
	I0819 18:49:56.883193  444547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:49:56.883244  444547 start.go:340] cluster config:
	{Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:49:56.883396  444547 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:49:56.885199  444547 out.go:177] * Starting "functional-124593" primary control-plane node in "functional-124593" cluster
	I0819 18:49:56.886649  444547 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:49:56.886699  444547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:49:56.886708  444547 cache.go:56] Caching tarball of preloaded images
	I0819 18:49:56.886828  444547 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:49:56.886844  444547 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:49:56.886977  444547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/config.json ...
	I0819 18:49:56.887255  444547 start.go:360] acquireMachinesLock for functional-124593: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:49:56.887316  444547 start.go:364] duration metric: took 31.483µs to acquireMachinesLock for "functional-124593"
	I0819 18:49:56.887333  444547 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:49:56.887345  444547 fix.go:54] fixHost starting: 
	I0819 18:49:56.887711  444547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:49:56.887765  444547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:49:56.903210  444547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43899
	I0819 18:49:56.903686  444547 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:49:56.904263  444547 main.go:141] libmachine: Using API Version  1
	I0819 18:49:56.904298  444547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:49:56.904680  444547 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:49:56.904935  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.905158  444547 main.go:141] libmachine: (functional-124593) Calling .GetState
	I0819 18:49:56.906833  444547 fix.go:112] recreateIfNeeded on functional-124593: state=Running err=<nil>
	W0819 18:49:56.906856  444547 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:49:56.908782  444547 out.go:177] * Updating the running kvm2 "functional-124593" VM ...
	I0819 18:49:56.910443  444547 machine.go:93] provisionDockerMachine start ...
	I0819 18:49:56.910478  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.910823  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:56.913259  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:56.913615  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:56.913638  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:56.913753  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:56.914043  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:56.914207  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:56.914341  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:56.914485  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:56.914684  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:56.914697  444547 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:49:57.017550  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-124593
	
	I0819 18:49:57.017585  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.017923  444547 buildroot.go:166] provisioning hostname "functional-124593"
	I0819 18:49:57.017956  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.018164  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.021185  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.021551  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.021598  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.021780  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.022011  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.022177  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.022309  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.022452  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.022654  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.022668  444547 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-124593 && echo "functional-124593" | sudo tee /etc/hostname
	I0819 18:49:57.141478  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-124593
	
	I0819 18:49:57.141514  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.144157  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.144414  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.144449  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.144722  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.144969  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.145192  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.145388  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.145570  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.145756  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.145776  444547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-124593' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-124593/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-124593' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:49:57.249989  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:49:57.250034  444547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 18:49:57.250086  444547 buildroot.go:174] setting up certificates
	I0819 18:49:57.250099  444547 provision.go:84] configureAuth start
	I0819 18:49:57.250118  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.250442  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:49:57.253181  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.253490  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.253519  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.253712  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.256213  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.256541  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.256586  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.256752  444547 provision.go:143] copyHostCerts
	I0819 18:49:57.256784  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 18:49:57.256824  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 18:49:57.256848  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 18:49:57.256918  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 18:49:57.257021  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 18:49:57.257043  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 18:49:57.257048  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 18:49:57.257071  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 18:49:57.257122  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 18:49:57.257160  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 18:49:57.257176  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 18:49:57.257198  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 18:49:57.257249  444547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.functional-124593 san=[127.0.0.1 192.168.39.22 functional-124593 localhost minikube]
	I0819 18:49:57.505075  444547 provision.go:177] copyRemoteCerts
	I0819 18:49:57.505163  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:49:57.505194  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.508248  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.508654  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.508690  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.508942  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.509160  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.509381  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.509556  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:49:57.591978  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:49:57.592075  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 18:49:57.620626  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:49:57.620699  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 18:49:57.646085  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:49:57.646168  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:49:57.671918  444547 provision.go:87] duration metric: took 421.80001ms to configureAuth
	I0819 18:49:57.671954  444547 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:49:57.672176  444547 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:49:57.672267  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.675054  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.675420  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.675456  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.675667  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.675902  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.676057  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.676211  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.676410  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.676596  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.676611  444547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:50:03.241286  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:50:03.241321  444547 machine.go:96] duration metric: took 6.330855619s to provisionDockerMachine
	I0819 18:50:03.241334  444547 start.go:293] postStartSetup for "functional-124593" (driver="kvm2")
	I0819 18:50:03.241346  444547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:50:03.241368  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.241892  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:50:03.241919  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.244822  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.245262  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.245291  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.245469  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.245716  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.245889  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.246048  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.327892  444547 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:50:03.332233  444547 command_runner.go:130] > NAME=Buildroot
	I0819 18:50:03.332262  444547 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 18:50:03.332268  444547 command_runner.go:130] > ID=buildroot
	I0819 18:50:03.332276  444547 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 18:50:03.332284  444547 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 18:50:03.332381  444547 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:50:03.332400  444547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 18:50:03.332476  444547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 18:50:03.332579  444547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 18:50:03.332593  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 18:50:03.332685  444547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts -> hosts in /etc/test/nested/copy/438159
	I0819 18:50:03.332692  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts -> /etc/test/nested/copy/438159/hosts
	I0819 18:50:03.332732  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/438159
	I0819 18:50:03.343618  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 18:50:03.367775  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts --> /etc/test/nested/copy/438159/hosts (40 bytes)
	I0819 18:50:03.392035  444547 start.go:296] duration metric: took 150.684705ms for postStartSetup
	I0819 18:50:03.392093  444547 fix.go:56] duration metric: took 6.504748451s for fixHost
	I0819 18:50:03.392120  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.394902  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.395203  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.395231  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.395450  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.395682  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.395876  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.396030  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.396215  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:50:03.396420  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:50:03.396434  444547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:50:03.498031  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724093403.488650243
	
	I0819 18:50:03.498062  444547 fix.go:216] guest clock: 1724093403.488650243
	I0819 18:50:03.498069  444547 fix.go:229] Guest: 2024-08-19 18:50:03.488650243 +0000 UTC Remote: 2024-08-19 18:50:03.392098301 +0000 UTC m=+6.637869514 (delta=96.551942ms)
	I0819 18:50:03.498115  444547 fix.go:200] guest clock delta is within tolerance: 96.551942ms
	I0819 18:50:03.498121  444547 start.go:83] releasing machines lock for "functional-124593", held for 6.610795712s
	I0819 18:50:03.498146  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.498456  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:50:03.501197  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.501685  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.501717  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.501963  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502567  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502825  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502931  444547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:50:03.502977  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.503104  444547 ssh_runner.go:195] Run: cat /version.json
	I0819 18:50:03.503130  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.505641  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.505904  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.505942  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.505982  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.506089  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.506248  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.506286  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.506326  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.506510  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.506529  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.506705  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.506709  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.506856  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.507023  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.596444  444547 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 18:50:03.596676  444547 ssh_runner.go:195] Run: systemctl --version
	I0819 18:50:03.642156  444547 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 18:50:03.642205  444547 command_runner.go:130] > systemd 252 (252)
	I0819 18:50:03.642223  444547 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 18:50:03.642284  444547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:50:04.032467  444547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 18:50:04.057730  444547 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 18:50:04.057919  444547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:50:04.058009  444547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:50:04.094792  444547 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
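The two commands above make up the CNI cleanup step: check for a loopback config (absent here), then rename any bridge/podman configs so cri-o ignores them. A rough by-hand equivalent on the guest, using the same paths as in the log:

	sh -c "stat /etc/cni/net.d/*loopback.conf*"
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;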
	I0819 18:50:04.094824  444547 start.go:495] detecting cgroup driver to use...
	I0819 18:50:04.094892  444547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:50:04.216404  444547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:50:04.250117  444547 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:50:04.250182  444547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:50:04.298450  444547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:50:04.329276  444547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:50:04.576464  444547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:50:04.796403  444547 docker.go:233] disabling docker service ...
	I0819 18:50:04.796509  444547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:50:04.824051  444547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:50:04.841929  444547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:50:05.032450  444547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:50:05.230662  444547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
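The block above amounts to stopping and masking every competing runtime so that cri-o is the only one left answering on the node; a condensed sketch of the same systemctl sequence (same units and flags as in the log, order slightly compacted):

	sudo systemctl stop -f containerd
	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service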
	I0819 18:50:05.261270  444547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:50:05.307751  444547 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
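The write above leaves /etc/crictl.yaml with a single line pointing crictl at the cri-o socket; an equivalent one-liner, assuming the same socket path:

	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml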
	I0819 18:50:05.308002  444547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:50:05.308071  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.325985  444547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:50:05.326072  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.340857  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.355923  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.368797  444547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:50:05.384107  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.396132  444547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.407497  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.421137  444547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:50:05.431493  444547 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 18:50:05.431832  444547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
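All of the cri-o configuration above is done as in-place sed edits on the 02-crio.conf drop-in plus two kernel knobs; a consolidated sketch of the same edits (same file and values as in the log, with minor steps such as removing /etc/cni/net.mk omitted):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	sudo sysctl net.bridge.bridge-nf-call-iptables
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'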
	I0819 18:50:05.444023  444547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:50:05.610160  444547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:51:35.953940  444547 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.343723561s)
	I0819 18:51:35.953984  444547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:51:35.954042  444547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:51:35.958905  444547 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 18:51:35.958943  444547 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 18:51:35.958954  444547 command_runner.go:130] > Device: 0,22	Inode: 1653        Links: 1
	I0819 18:51:35.958965  444547 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 18:51:35.958973  444547 command_runner.go:130] > Access: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958982  444547 command_runner.go:130] > Modify: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958993  444547 command_runner.go:130] > Change: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958999  444547 command_runner.go:130] >  Birth: -
	I0819 18:51:35.959026  444547 start.go:563] Will wait 60s for crictl version
	I0819 18:51:35.959080  444547 ssh_runner.go:195] Run: which crictl
	I0819 18:51:35.962908  444547 command_runner.go:130] > /usr/bin/crictl
	I0819 18:51:35.963010  444547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:51:35.995379  444547 command_runner.go:130] > Version:  0.1.0
	I0819 18:51:35.995417  444547 command_runner.go:130] > RuntimeName:  cri-o
	I0819 18:51:35.995425  444547 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 18:51:35.995433  444547 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 18:51:35.996527  444547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
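After the (unusually long, 1m30s) crio restart, readiness is established entirely through the socket and crictl; the same checks run by hand would look like:

	stat /var/run/crio/crio.sock
	sudo /usr/bin/crictl version   # expects RuntimeName: cri-o, RuntimeApiVersion: v1
	crio --version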
	I0819 18:51:35.996626  444547 ssh_runner.go:195] Run: crio --version
	I0819 18:51:36.025037  444547 command_runner.go:130] > crio version 1.29.1
	I0819 18:51:36.025067  444547 command_runner.go:130] > Version:        1.29.1
	I0819 18:51:36.025076  444547 command_runner.go:130] > GitCommit:      unknown
	I0819 18:51:36.025082  444547 command_runner.go:130] > GitCommitDate:  unknown
	I0819 18:51:36.025088  444547 command_runner.go:130] > GitTreeState:   clean
	I0819 18:51:36.025097  444547 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 18:51:36.025103  444547 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 18:51:36.025108  444547 command_runner.go:130] > Compiler:       gc
	I0819 18:51:36.025115  444547 command_runner.go:130] > Platform:       linux/amd64
	I0819 18:51:36.025122  444547 command_runner.go:130] > Linkmode:       dynamic
	I0819 18:51:36.025137  444547 command_runner.go:130] > BuildTags:      
	I0819 18:51:36.025142  444547 command_runner.go:130] >   containers_image_ostree_stub
	I0819 18:51:36.025147  444547 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 18:51:36.025151  444547 command_runner.go:130] >   btrfs_noversion
	I0819 18:51:36.025156  444547 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 18:51:36.025161  444547 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 18:51:36.025169  444547 command_runner.go:130] >   seccomp
	I0819 18:51:36.025175  444547 command_runner.go:130] > LDFlags:          unknown
	I0819 18:51:36.025182  444547 command_runner.go:130] > SeccompEnabled:   true
	I0819 18:51:36.025187  444547 command_runner.go:130] > AppArmorEnabled:  false
	I0819 18:51:36.025256  444547 ssh_runner.go:195] Run: crio --version
	I0819 18:51:36.052216  444547 command_runner.go:130] > crio version 1.29.1
	I0819 18:51:36.052240  444547 command_runner.go:130] > Version:        1.29.1
	I0819 18:51:36.052247  444547 command_runner.go:130] > GitCommit:      unknown
	I0819 18:51:36.052252  444547 command_runner.go:130] > GitCommitDate:  unknown
	I0819 18:51:36.052256  444547 command_runner.go:130] > GitTreeState:   clean
	I0819 18:51:36.052261  444547 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 18:51:36.052266  444547 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 18:51:36.052270  444547 command_runner.go:130] > Compiler:       gc
	I0819 18:51:36.052282  444547 command_runner.go:130] > Platform:       linux/amd64
	I0819 18:51:36.052288  444547 command_runner.go:130] > Linkmode:       dynamic
	I0819 18:51:36.052294  444547 command_runner.go:130] > BuildTags:      
	I0819 18:51:36.052301  444547 command_runner.go:130] >   containers_image_ostree_stub
	I0819 18:51:36.052307  444547 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 18:51:36.052317  444547 command_runner.go:130] >   btrfs_noversion
	I0819 18:51:36.052324  444547 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 18:51:36.052333  444547 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 18:51:36.052338  444547 command_runner.go:130] >   seccomp
	I0819 18:51:36.052345  444547 command_runner.go:130] > LDFlags:          unknown
	I0819 18:51:36.052350  444547 command_runner.go:130] > SeccompEnabled:   true
	I0819 18:51:36.052356  444547 command_runner.go:130] > AppArmorEnabled:  false
	I0819 18:51:36.055292  444547 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:51:36.056598  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:51:36.059532  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:51:36.059864  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:51:36.059895  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:51:36.060137  444547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:51:36.064416  444547 command_runner.go:130] > 192.168.39.1	host.minikube.internal
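The /etc/hosts probe above is a plain grep for the gateway alias; roughly:

	grep 'host.minikube.internal$' /etc/hosts   # expects: 192.168.39.1<TAB>host.minikube.internal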
	I0819 18:51:36.064570  444547 kubeadm.go:883] updating cluster {Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:51:36.064698  444547 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:51:36.064782  444547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:51:36.110239  444547 command_runner.go:130] > {
	I0819 18:51:36.110264  444547 command_runner.go:130] >   "images": [
	I0819 18:51:36.110268  444547 command_runner.go:130] >     {
	I0819 18:51:36.110277  444547 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 18:51:36.110281  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110287  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 18:51:36.110290  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110294  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110303  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 18:51:36.110310  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 18:51:36.110314  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110319  444547 command_runner.go:130] >       "size": "87165492",
	I0819 18:51:36.110324  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110330  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110343  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110350  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110359  444547 command_runner.go:130] >     },
	I0819 18:51:36.110364  444547 command_runner.go:130] >     {
	I0819 18:51:36.110373  444547 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 18:51:36.110391  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110399  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 18:51:36.110402  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110406  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110414  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 18:51:36.110425  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 18:51:36.110432  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110443  444547 command_runner.go:130] >       "size": "31470524",
	I0819 18:51:36.110453  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110461  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110468  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110477  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110483  444547 command_runner.go:130] >     },
	I0819 18:51:36.110502  444547 command_runner.go:130] >     {
	I0819 18:51:36.110513  444547 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 18:51:36.110522  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110533  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 18:51:36.110539  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110549  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110563  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 18:51:36.110577  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 18:51:36.110586  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110594  444547 command_runner.go:130] >       "size": "61245718",
	I0819 18:51:36.110601  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110611  444547 command_runner.go:130] >       "username": "nonroot",
	I0819 18:51:36.110621  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110631  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110637  444547 command_runner.go:130] >     },
	I0819 18:51:36.110645  444547 command_runner.go:130] >     {
	I0819 18:51:36.110658  444547 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 18:51:36.110668  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110677  444547 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 18:51:36.110684  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110701  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110715  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 18:51:36.110733  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 18:51:36.110742  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110753  444547 command_runner.go:130] >       "size": "149009664",
	I0819 18:51:36.110760  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.110764  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.110770  444547 command_runner.go:130] >       },
	I0819 18:51:36.110777  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110787  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110797  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110805  444547 command_runner.go:130] >     },
	I0819 18:51:36.110814  444547 command_runner.go:130] >     {
	I0819 18:51:36.110823  444547 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 18:51:36.110832  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110842  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 18:51:36.110849  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110853  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110868  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 18:51:36.110884  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 18:51:36.110893  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110901  444547 command_runner.go:130] >       "size": "95233506",
	I0819 18:51:36.110909  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.110918  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.110927  444547 command_runner.go:130] >       },
	I0819 18:51:36.110934  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110939  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110947  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110956  444547 command_runner.go:130] >     },
	I0819 18:51:36.110965  444547 command_runner.go:130] >     {
	I0819 18:51:36.110978  444547 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 18:51:36.110988  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110999  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 18:51:36.111007  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111013  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111025  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 18:51:36.111040  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 18:51:36.111049  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111060  444547 command_runner.go:130] >       "size": "89437512",
	I0819 18:51:36.111070  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111080  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.111089  444547 command_runner.go:130] >       },
	I0819 18:51:36.111096  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111104  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111114  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111122  444547 command_runner.go:130] >     },
	I0819 18:51:36.111128  444547 command_runner.go:130] >     {
	I0819 18:51:36.111140  444547 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 18:51:36.111148  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111154  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 18:51:36.111163  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111170  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111185  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 18:51:36.111199  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 18:51:36.111206  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111213  444547 command_runner.go:130] >       "size": "92728217",
	I0819 18:51:36.111223  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.111230  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111239  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111246  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111254  444547 command_runner.go:130] >     },
	I0819 18:51:36.111267  444547 command_runner.go:130] >     {
	I0819 18:51:36.111281  444547 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 18:51:36.111290  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111299  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 18:51:36.111307  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111313  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111333  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 18:51:36.111345  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 18:51:36.111351  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111355  444547 command_runner.go:130] >       "size": "68420936",
	I0819 18:51:36.111361  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111365  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.111370  444547 command_runner.go:130] >       },
	I0819 18:51:36.111374  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111381  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111385  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111389  444547 command_runner.go:130] >     },
	I0819 18:51:36.111393  444547 command_runner.go:130] >     {
	I0819 18:51:36.111399  444547 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 18:51:36.111405  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111410  444547 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 18:51:36.111415  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111420  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111429  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 18:51:36.111438  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 18:51:36.111442  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111448  444547 command_runner.go:130] >       "size": "742080",
	I0819 18:51:36.111452  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111456  444547 command_runner.go:130] >         "value": "65535"
	I0819 18:51:36.111460  444547 command_runner.go:130] >       },
	I0819 18:51:36.111464  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111480  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111486  444547 command_runner.go:130] >       "pinned": true
	I0819 18:51:36.111494  444547 command_runner.go:130] >     }
	I0819 18:51:36.111502  444547 command_runner.go:130] >   ]
	I0819 18:51:36.111507  444547 command_runner.go:130] > }
	I0819 18:51:36.111701  444547 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:51:36.111714  444547 crio.go:433] Images already preloaded, skipping extraction
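The preload decision comes straight from that crictl inventory: every image the v1.31.0/cri-o preload needs is already present, so no tarball extraction happens. Reproducing the check by hand could look like the following (jq is an assumption here, not something the test itself uses):

	sudo crictl images --output json | jq -r '.images[].repoTags[]'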
	I0819 18:51:36.111767  444547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:51:36.143806  444547 command_runner.go:130] > {
	I0819 18:51:36.143831  444547 command_runner.go:130] >   "images": [
	I0819 18:51:36.143835  444547 command_runner.go:130] >     {
	I0819 18:51:36.143843  444547 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 18:51:36.143848  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.143854  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 18:51:36.143857  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143861  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.143870  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 18:51:36.143877  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 18:51:36.143883  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143887  444547 command_runner.go:130] >       "size": "87165492",
	I0819 18:51:36.143891  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.143898  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.143904  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.143909  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.143912  444547 command_runner.go:130] >     },
	I0819 18:51:36.143916  444547 command_runner.go:130] >     {
	I0819 18:51:36.143922  444547 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 18:51:36.143929  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.143934  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 18:51:36.143939  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143943  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.143953  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 18:51:36.143960  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 18:51:36.143967  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143978  444547 command_runner.go:130] >       "size": "31470524",
	I0819 18:51:36.143984  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.143992  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144001  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144007  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144016  444547 command_runner.go:130] >     },
	I0819 18:51:36.144021  444547 command_runner.go:130] >     {
	I0819 18:51:36.144036  444547 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 18:51:36.144043  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144048  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 18:51:36.144054  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144058  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144067  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 18:51:36.144085  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 18:51:36.144093  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144100  444547 command_runner.go:130] >       "size": "61245718",
	I0819 18:51:36.144109  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.144119  444547 command_runner.go:130] >       "username": "nonroot",
	I0819 18:51:36.144126  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144134  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144138  444547 command_runner.go:130] >     },
	I0819 18:51:36.144142  444547 command_runner.go:130] >     {
	I0819 18:51:36.144148  444547 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 18:51:36.144154  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144159  444547 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 18:51:36.144162  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144165  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144172  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 18:51:36.144188  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 18:51:36.144197  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144204  444547 command_runner.go:130] >       "size": "149009664",
	I0819 18:51:36.144213  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144220  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144227  444547 command_runner.go:130] >       },
	I0819 18:51:36.144231  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144237  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144243  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144249  444547 command_runner.go:130] >     },
	I0819 18:51:36.144252  444547 command_runner.go:130] >     {
	I0819 18:51:36.144259  444547 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 18:51:36.144267  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144276  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 18:51:36.144285  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144291  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144305  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 18:51:36.144320  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 18:51:36.144327  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144333  444547 command_runner.go:130] >       "size": "95233506",
	I0819 18:51:36.144337  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144341  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144347  444547 command_runner.go:130] >       },
	I0819 18:51:36.144352  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144358  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144365  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144374  444547 command_runner.go:130] >     },
	I0819 18:51:36.144380  444547 command_runner.go:130] >     {
	I0819 18:51:36.144389  444547 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 18:51:36.144399  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144408  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 18:51:36.144419  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144427  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144435  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 18:51:36.144449  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 18:51:36.144471  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144501  444547 command_runner.go:130] >       "size": "89437512",
	I0819 18:51:36.144507  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144516  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144521  444547 command_runner.go:130] >       },
	I0819 18:51:36.144526  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144532  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144541  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144547  444547 command_runner.go:130] >     },
	I0819 18:51:36.144558  444547 command_runner.go:130] >     {
	I0819 18:51:36.144568  444547 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 18:51:36.144577  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144585  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 18:51:36.144593  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144600  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144611  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 18:51:36.144623  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 18:51:36.144632  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144640  444547 command_runner.go:130] >       "size": "92728217",
	I0819 18:51:36.144649  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.144656  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144663  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144669  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144677  444547 command_runner.go:130] >     },
	I0819 18:51:36.144682  444547 command_runner.go:130] >     {
	I0819 18:51:36.144694  444547 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 18:51:36.144704  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144716  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 18:51:36.144725  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144734  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144755  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 18:51:36.144768  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 18:51:36.144775  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144780  444547 command_runner.go:130] >       "size": "68420936",
	I0819 18:51:36.144789  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144798  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144807  444547 command_runner.go:130] >       },
	I0819 18:51:36.144816  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144826  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144835  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144843  444547 command_runner.go:130] >     },
	I0819 18:51:36.144849  444547 command_runner.go:130] >     {
	I0819 18:51:36.144864  444547 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 18:51:36.144873  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144882  444547 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 18:51:36.144892  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144901  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144912  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 18:51:36.144926  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 18:51:36.144934  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144940  444547 command_runner.go:130] >       "size": "742080",
	I0819 18:51:36.144944  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144950  444547 command_runner.go:130] >         "value": "65535"
	I0819 18:51:36.144958  444547 command_runner.go:130] >       },
	I0819 18:51:36.144968  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144979  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144988  444547 command_runner.go:130] >       "pinned": true
	I0819 18:51:36.144995  444547 command_runner.go:130] >     }
	I0819 18:51:36.145001  444547 command_runner.go:130] >   ]
	I0819 18:51:36.145008  444547 command_runner.go:130] > }
	I0819 18:51:36.145182  444547 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:51:36.145198  444547 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:51:36.145207  444547 kubeadm.go:934] updating node { 192.168.39.22 8441 v1.31.0 crio true true} ...
	I0819 18:51:36.145347  444547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-124593 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
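The kubelet override above is what minikube renders for this node: crio.service as the runtime dependency, and the node IP, hostname override, and kubeconfig paths pinned. To see what actually landed on the guest, something like the following would do; the drop-in path is an assumption based on kubeadm conventions rather than something shown in this log:

	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf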
	I0819 18:51:36.145440  444547 ssh_runner.go:195] Run: crio config
	I0819 18:51:36.185689  444547 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 18:51:36.185722  444547 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 18:51:36.185733  444547 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 18:51:36.185738  444547 command_runner.go:130] > #
	I0819 18:51:36.185763  444547 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 18:51:36.185772  444547 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 18:51:36.185782  444547 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 18:51:36.185794  444547 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 18:51:36.185800  444547 command_runner.go:130] > # reload'.
	I0819 18:51:36.185810  444547 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 18:51:36.185824  444547 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 18:51:36.185834  444547 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 18:51:36.185851  444547 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 18:51:36.185857  444547 command_runner.go:130] > [crio]
	I0819 18:51:36.185867  444547 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 18:51:36.185878  444547 command_runner.go:130] > # containers images, in this directory.
	I0819 18:51:36.185886  444547 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 18:51:36.185906  444547 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 18:51:36.185916  444547 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 18:51:36.185927  444547 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 18:51:36.185937  444547 command_runner.go:130] > # imagestore = ""
	I0819 18:51:36.185947  444547 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 18:51:36.185960  444547 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 18:51:36.185968  444547 command_runner.go:130] > storage_driver = "overlay"
	I0819 18:51:36.185979  444547 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 18:51:36.185990  444547 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 18:51:36.186001  444547 command_runner.go:130] > storage_option = [
	I0819 18:51:36.186010  444547 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 18:51:36.186018  444547 command_runner.go:130] > ]
	I0819 18:51:36.186029  444547 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 18:51:36.186041  444547 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 18:51:36.186052  444547 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 18:51:36.186068  444547 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 18:51:36.186082  444547 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 18:51:36.186092  444547 command_runner.go:130] > # always happen on a node reboot
	I0819 18:51:36.186103  444547 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 18:51:36.186124  444547 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 18:51:36.186136  444547 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 18:51:36.186147  444547 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 18:51:36.186155  444547 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 18:51:36.186168  444547 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 18:51:36.186183  444547 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 18:51:36.186193  444547 command_runner.go:130] > # internal_wipe = true
	I0819 18:51:36.186206  444547 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 18:51:36.186217  444547 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 18:51:36.186227  444547 command_runner.go:130] > # internal_repair = false
	I0819 18:51:36.186235  444547 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 18:51:36.186247  444547 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 18:51:36.186256  444547 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 18:51:36.186268  444547 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0819 18:51:36.186303  444547 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 18:51:36.186317  444547 command_runner.go:130] > [crio.api]
	I0819 18:51:36.186326  444547 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 18:51:36.186333  444547 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 18:51:36.186342  444547 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 18:51:36.186353  444547 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 18:51:36.186363  444547 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 18:51:36.186374  444547 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 18:51:36.186386  444547 command_runner.go:130] > # stream_port = "0"
	I0819 18:51:36.186395  444547 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 18:51:36.186402  444547 command_runner.go:130] > # stream_enable_tls = false
	I0819 18:51:36.186409  444547 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 18:51:36.186418  444547 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 18:51:36.186429  444547 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 18:51:36.186441  444547 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 18:51:36.186450  444547 command_runner.go:130] > # minutes.
	I0819 18:51:36.186457  444547 command_runner.go:130] > # stream_tls_cert = ""
	I0819 18:51:36.186468  444547 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 18:51:36.186486  444547 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 18:51:36.186498  444547 command_runner.go:130] > # stream_tls_key = ""
	I0819 18:51:36.186511  444547 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 18:51:36.186523  444547 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 18:51:36.186547  444547 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 18:51:36.186556  444547 command_runner.go:130] > # stream_tls_ca = ""
	I0819 18:51:36.186567  444547 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 18:51:36.186578  444547 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 18:51:36.186589  444547 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 18:51:36.186600  444547 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0819 18:51:36.186610  444547 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 18:51:36.186622  444547 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 18:51:36.186629  444547 command_runner.go:130] > [crio.runtime]
	I0819 18:51:36.186639  444547 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 18:51:36.186650  444547 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 18:51:36.186659  444547 command_runner.go:130] > # "nofile=1024:2048"
	I0819 18:51:36.186670  444547 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 18:51:36.186674  444547 command_runner.go:130] > # default_ulimits = [
	I0819 18:51:36.186678  444547 command_runner.go:130] > # ]
	I0819 18:51:36.186687  444547 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 18:51:36.186701  444547 command_runner.go:130] > # no_pivot = false
	I0819 18:51:36.186714  444547 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 18:51:36.186727  444547 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 18:51:36.186738  444547 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 18:51:36.186747  444547 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 18:51:36.186758  444547 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 18:51:36.186773  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 18:51:36.186783  444547 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 18:51:36.186791  444547 command_runner.go:130] > # Cgroup setting for conmon
	I0819 18:51:36.186805  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 18:51:36.186814  444547 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 18:51:36.186824  444547 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 18:51:36.186834  444547 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 18:51:36.186845  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 18:51:36.186855  444547 command_runner.go:130] > conmon_env = [
	I0819 18:51:36.186864  444547 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 18:51:36.186872  444547 command_runner.go:130] > ]
	I0819 18:51:36.186881  444547 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 18:51:36.186891  444547 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 18:51:36.186902  444547 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 18:51:36.186911  444547 command_runner.go:130] > # default_env = [
	I0819 18:51:36.186916  444547 command_runner.go:130] > # ]
	I0819 18:51:36.186957  444547 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 18:51:36.186977  444547 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0819 18:51:36.186983  444547 command_runner.go:130] > # selinux = false
	I0819 18:51:36.186992  444547 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 18:51:36.187004  444547 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 18:51:36.187019  444547 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 18:51:36.187029  444547 command_runner.go:130] > # seccomp_profile = ""
	I0819 18:51:36.187038  444547 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 18:51:36.187049  444547 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 18:51:36.187059  444547 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 18:51:36.187069  444547 command_runner.go:130] > # which might increase security.
	I0819 18:51:36.187074  444547 command_runner.go:130] > # This option is currently deprecated,
	I0819 18:51:36.187084  444547 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 18:51:36.187095  444547 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 18:51:36.187107  444547 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 18:51:36.187127  444547 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 18:51:36.187139  444547 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 18:51:36.187152  444547 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0819 18:51:36.187160  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.187167  444547 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 18:51:36.187178  444547 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 18:51:36.187188  444547 command_runner.go:130] > # the cgroup blockio controller.
	I0819 18:51:36.187200  444547 command_runner.go:130] > # blockio_config_file = ""
	I0819 18:51:36.187214  444547 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 18:51:36.187224  444547 command_runner.go:130] > # blockio parameters.
	I0819 18:51:36.187231  444547 command_runner.go:130] > # blockio_reload = false
	I0819 18:51:36.187241  444547 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 18:51:36.187250  444547 command_runner.go:130] > # irqbalance daemon.
	I0819 18:51:36.187259  444547 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 18:51:36.187271  444547 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0819 18:51:36.187285  444547 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 18:51:36.187297  444547 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 18:51:36.187309  444547 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 18:51:36.187322  444547 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 18:51:36.187332  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.187344  444547 command_runner.go:130] > # rdt_config_file = ""
	I0819 18:51:36.187353  444547 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 18:51:36.187363  444547 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 18:51:36.187390  444547 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 18:51:36.187400  444547 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 18:51:36.187410  444547 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 18:51:36.187425  444547 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 18:51:36.187435  444547 command_runner.go:130] > # will be added.
	I0819 18:51:36.187442  444547 command_runner.go:130] > # default_capabilities = [
	I0819 18:51:36.187451  444547 command_runner.go:130] > # 	"CHOWN",
	I0819 18:51:36.187458  444547 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 18:51:36.187466  444547 command_runner.go:130] > # 	"FSETID",
	I0819 18:51:36.187476  444547 command_runner.go:130] > # 	"FOWNER",
	I0819 18:51:36.187484  444547 command_runner.go:130] > # 	"SETGID",
	I0819 18:51:36.187490  444547 command_runner.go:130] > # 	"SETUID",
	I0819 18:51:36.187499  444547 command_runner.go:130] > # 	"SETPCAP",
	I0819 18:51:36.187506  444547 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 18:51:36.187516  444547 command_runner.go:130] > # 	"KILL",
	I0819 18:51:36.187521  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187536  444547 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 18:51:36.187549  444547 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 18:51:36.187564  444547 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 18:51:36.187577  444547 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 18:51:36.187588  444547 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 18:51:36.187595  444547 command_runner.go:130] > default_sysctls = [
	I0819 18:51:36.187599  444547 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 18:51:36.187602  444547 command_runner.go:130] > ]
	I0819 18:51:36.187607  444547 command_runner.go:130] > # List of devices on the host that a
	I0819 18:51:36.187613  444547 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 18:51:36.187617  444547 command_runner.go:130] > # allowed_devices = [
	I0819 18:51:36.187621  444547 command_runner.go:130] > # 	"/dev/fuse",
	I0819 18:51:36.187626  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187637  444547 command_runner.go:130] > # List of additional devices, specified as
	I0819 18:51:36.187650  444547 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 18:51:36.187663  444547 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 18:51:36.187675  444547 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 18:51:36.187685  444547 command_runner.go:130] > # additional_devices = [
	I0819 18:51:36.187690  444547 command_runner.go:130] > # ]
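For reference, a minimal sketch of how the two device options documented above might look if uncommented in crio.conf; the device paths and the "rwm" permissions below are illustrative assumptions, not values taken from this run:

	allowed_devices = [
		"/dev/fuse",
	]
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",
	]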
	I0819 18:51:36.187699  444547 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 18:51:36.187703  444547 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 18:51:36.187707  444547 command_runner.go:130] > # 	"/etc/cdi",
	I0819 18:51:36.187711  444547 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 18:51:36.187715  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187721  444547 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 18:51:36.187729  444547 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 18:51:36.187735  444547 command_runner.go:130] > # Defaults to false.
	I0819 18:51:36.187739  444547 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 18:51:36.187746  444547 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 18:51:36.187753  444547 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 18:51:36.187756  444547 command_runner.go:130] > # hooks_dir = [
	I0819 18:51:36.187761  444547 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 18:51:36.187766  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187775  444547 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 18:51:36.187788  444547 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 18:51:36.187800  444547 command_runner.go:130] > # its default mounts from the following two files:
	I0819 18:51:36.187808  444547 command_runner.go:130] > #
	I0819 18:51:36.187819  444547 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 18:51:36.187831  444547 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 18:51:36.187841  444547 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 18:51:36.187846  444547 command_runner.go:130] > #
	I0819 18:51:36.187856  444547 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 18:51:36.187870  444547 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 18:51:36.187887  444547 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 18:51:36.187899  444547 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 18:51:36.187907  444547 command_runner.go:130] > #
	I0819 18:51:36.187915  444547 command_runner.go:130] > # default_mounts_file = ""
	I0819 18:51:36.187927  444547 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 18:51:36.187940  444547 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 18:51:36.187948  444547 command_runner.go:130] > pids_limit = 1024
	I0819 18:51:36.187961  444547 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0819 18:51:36.187976  444547 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 18:51:36.187989  444547 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 18:51:36.188004  444547 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 18:51:36.188020  444547 command_runner.go:130] > # log_size_max = -1
	I0819 18:51:36.188034  444547 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 18:51:36.188043  444547 command_runner.go:130] > # log_to_journald = false
	I0819 18:51:36.188053  444547 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 18:51:36.188064  444547 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 18:51:36.188076  444547 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 18:51:36.188084  444547 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 18:51:36.188095  444547 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 18:51:36.188103  444547 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 18:51:36.188113  444547 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 18:51:36.188123  444547 command_runner.go:130] > # read_only = false
	I0819 18:51:36.188133  444547 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 18:51:36.188144  444547 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 18:51:36.188151  444547 command_runner.go:130] > # live configuration reload.
	I0819 18:51:36.188161  444547 command_runner.go:130] > # log_level = "info"
	I0819 18:51:36.188171  444547 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 18:51:36.188182  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.188190  444547 command_runner.go:130] > # log_filter = ""
	I0819 18:51:36.188199  444547 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 18:51:36.188216  444547 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 18:51:36.188225  444547 command_runner.go:130] > # separated by comma.
	I0819 18:51:36.188237  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188247  444547 command_runner.go:130] > # uid_mappings = ""
	I0819 18:51:36.188257  444547 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 18:51:36.188269  444547 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 18:51:36.188278  444547 command_runner.go:130] > # separated by comma.
	I0819 18:51:36.188293  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188303  444547 command_runner.go:130] > # gid_mappings = ""
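As a sketch of the containerUID:HostUID:Size / containerGID:HostGID:Size syntax described above (the ranges are illustrative only, and both options are deprecated in favor of Kubernetes user-namespace support):

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"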
	I0819 18:51:36.188313  444547 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 18:51:36.188325  444547 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 18:51:36.188337  444547 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 18:51:36.188351  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188359  444547 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 18:51:36.188366  444547 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 18:51:36.188375  444547 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 18:51:36.188381  444547 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 18:51:36.188390  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188394  444547 command_runner.go:130] > # minimum_mappable_gid = -1
	I0819 18:51:36.188402  444547 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 18:51:36.188408  444547 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 18:51:36.188415  444547 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 18:51:36.188419  444547 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 18:51:36.188424  444547 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 18:51:36.188430  444547 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 18:51:36.188437  444547 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 18:51:36.188441  444547 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 18:51:36.188445  444547 command_runner.go:130] > drop_infra_ctr = false
	I0819 18:51:36.188451  444547 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 18:51:36.188458  444547 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 18:51:36.188465  444547 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 18:51:36.188471  444547 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 18:51:36.188482  444547 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 18:51:36.188489  444547 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 18:51:36.188495  444547 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 18:51:36.188502  444547 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 18:51:36.188506  444547 command_runner.go:130] > # shared_cpuset = ""
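A short sketch of the Linux CPU list format these two cpuset options expect; the CPU numbers are placeholders chosen for illustration only:

	infra_ctr_cpuset = "0"
	shared_cpuset = "1-3,5"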
	I0819 18:51:36.188514  444547 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 18:51:36.188519  444547 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 18:51:36.188524  444547 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 18:51:36.188531  444547 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 18:51:36.188537  444547 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 18:51:36.188549  444547 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 18:51:36.188561  444547 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 18:51:36.188571  444547 command_runner.go:130] > # enable_criu_support = false
	I0819 18:51:36.188579  444547 command_runner.go:130] > # Enable/disable the generation of the container,
	I0819 18:51:36.188591  444547 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 18:51:36.188598  444547 command_runner.go:130] > # enable_pod_events = false
	I0819 18:51:36.188604  444547 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 18:51:36.188620  444547 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 18:51:36.188626  444547 command_runner.go:130] > # default_runtime = "runc"
	I0819 18:51:36.188631  444547 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 18:51:36.188638  444547 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0819 18:51:36.188649  444547 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 18:51:36.188656  444547 command_runner.go:130] > # creation as a file is not desired either.
	I0819 18:51:36.188664  444547 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 18:51:36.188671  444547 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 18:51:36.188675  444547 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 18:51:36.188681  444547 command_runner.go:130] > # ]
	I0819 18:51:36.188686  444547 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 18:51:36.188694  444547 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 18:51:36.188700  444547 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 18:51:36.188708  444547 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 18:51:36.188711  444547 command_runner.go:130] > #
	I0819 18:51:36.188716  444547 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 18:51:36.188720  444547 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 18:51:36.188744  444547 command_runner.go:130] > # runtime_type = "oci"
	I0819 18:51:36.188752  444547 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 18:51:36.188757  444547 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 18:51:36.188763  444547 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 18:51:36.188768  444547 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 18:51:36.188774  444547 command_runner.go:130] > # monitor_env = []
	I0819 18:51:36.188778  444547 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 18:51:36.188782  444547 command_runner.go:130] > # allowed_annotations = []
	I0819 18:51:36.188790  444547 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 18:51:36.188795  444547 command_runner.go:130] > # Where:
	I0819 18:51:36.188800  444547 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 18:51:36.188806  444547 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 18:51:36.188813  444547 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 18:51:36.188822  444547 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 18:51:36.188828  444547 command_runner.go:130] > #   in $PATH.
	I0819 18:51:36.188834  444547 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 18:51:36.188839  444547 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 18:51:36.188845  444547 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 18:51:36.188851  444547 command_runner.go:130] > #   state.
	I0819 18:51:36.188858  444547 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 18:51:36.188865  444547 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0819 18:51:36.188871  444547 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 18:51:36.188879  444547 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 18:51:36.188885  444547 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 18:51:36.188893  444547 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 18:51:36.188898  444547 command_runner.go:130] > #   The currently recognized values are:
	I0819 18:51:36.188904  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 18:51:36.188911  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 18:51:36.188917  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 18:51:36.188925  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 18:51:36.188934  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 18:51:36.188940  444547 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 18:51:36.188948  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 18:51:36.188954  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 18:51:36.188962  444547 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 18:51:36.188968  444547 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 18:51:36.188972  444547 command_runner.go:130] > #   deprecated option "conmon".
	I0819 18:51:36.188979  444547 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 18:51:36.188985  444547 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 18:51:36.188992  444547 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 18:51:36.188998  444547 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 18:51:36.189006  444547 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0819 18:51:36.189013  444547 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 18:51:36.189019  444547 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 18:51:36.189026  444547 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0819 18:51:36.189031  444547 command_runner.go:130] > #
	I0819 18:51:36.189041  444547 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 18:51:36.189044  444547 command_runner.go:130] > #
	I0819 18:51:36.189051  444547 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 18:51:36.189058  444547 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 18:51:36.189062  444547 command_runner.go:130] > #
	I0819 18:51:36.189070  444547 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 18:51:36.189078  444547 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 18:51:36.189082  444547 command_runner.go:130] > #
	I0819 18:51:36.189089  444547 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 18:51:36.189095  444547 command_runner.go:130] > # feature.
	I0819 18:51:36.189100  444547 command_runner.go:130] > #
	I0819 18:51:36.189106  444547 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0819 18:51:36.189114  444547 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 18:51:36.189120  444547 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 18:51:36.189127  444547 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 18:51:36.189146  444547 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 18:51:36.189154  444547 command_runner.go:130] > #
	I0819 18:51:36.189163  444547 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 18:51:36.189174  444547 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 18:51:36.189178  444547 command_runner.go:130] > #
	I0819 18:51:36.189184  444547 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 18:51:36.189192  444547 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 18:51:36.189195  444547 command_runner.go:130] > #
	I0819 18:51:36.189203  444547 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 18:51:36.189209  444547 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 18:51:36.189214  444547 command_runner.go:130] > # limitation.
	I0819 18:51:36.189220  444547 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 18:51:36.189226  444547 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 18:51:36.189230  444547 command_runner.go:130] > runtime_type = "oci"
	I0819 18:51:36.189234  444547 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 18:51:36.189240  444547 command_runner.go:130] > runtime_config_path = ""
	I0819 18:51:36.189244  444547 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 18:51:36.189248  444547 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 18:51:36.189252  444547 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 18:51:36.189256  444547 command_runner.go:130] > monitor_env = [
	I0819 18:51:36.189261  444547 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 18:51:36.189266  444547 command_runner.go:130] > ]
	I0819 18:51:36.189270  444547 command_runner.go:130] > privileged_without_host_devices = false
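Following the runtime-handler table format documented above, an additional handler could be declared as sketched below; the crun name, paths, and annotation list are assumptions for illustration and are not part of this cluster's configuration:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]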
	I0819 18:51:36.189278  444547 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 18:51:36.189283  444547 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 18:51:36.189291  444547 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 18:51:36.189302  444547 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0819 18:51:36.189311  444547 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 18:51:36.189317  444547 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 18:51:36.189328  444547 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 18:51:36.189339  444547 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 18:51:36.189346  444547 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 18:51:36.189353  444547 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 18:51:36.189358  444547 command_runner.go:130] > # Example:
	I0819 18:51:36.189363  444547 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 18:51:36.189370  444547 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 18:51:36.189374  444547 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 18:51:36.189382  444547 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 18:51:36.189386  444547 command_runner.go:130] > # cpuset = 0
	I0819 18:51:36.189393  444547 command_runner.go:130] > # cpushares = "0-1"
	I0819 18:51:36.189396  444547 command_runner.go:130] > # Where:
	I0819 18:51:36.189401  444547 command_runner.go:130] > # The workload name is workload-type.
	I0819 18:51:36.189409  444547 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 18:51:36.189415  444547 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 18:51:36.189422  444547 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 18:51:36.189430  444547 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 18:51:36.189437  444547 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0819 18:51:36.189442  444547 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 18:51:36.189449  444547 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 18:51:36.189455  444547 command_runner.go:130] > # Default value is set to true
	I0819 18:51:36.189459  444547 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 18:51:36.189469  444547 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 18:51:36.189478  444547 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 18:51:36.189484  444547 command_runner.go:130] > # Default value is set to 'false'
	I0819 18:51:36.189489  444547 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 18:51:36.189497  444547 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 18:51:36.189500  444547 command_runner.go:130] > #
	I0819 18:51:36.189505  444547 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 18:51:36.189513  444547 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 18:51:36.189519  444547 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 18:51:36.189528  444547 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 18:51:36.189536  444547 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0819 18:51:36.189542  444547 command_runner.go:130] > [crio.image]
	I0819 18:51:36.189548  444547 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 18:51:36.189554  444547 command_runner.go:130] > # default_transport = "docker://"
	I0819 18:51:36.189560  444547 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 18:51:36.189569  444547 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 18:51:36.189574  444547 command_runner.go:130] > # global_auth_file = ""
	I0819 18:51:36.189578  444547 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 18:51:36.189583  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.189590  444547 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 18:51:36.189596  444547 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 18:51:36.189604  444547 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 18:51:36.189609  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.189615  444547 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 18:51:36.189620  444547 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 18:51:36.189626  444547 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0819 18:51:36.189632  444547 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0819 18:51:36.189639  444547 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 18:51:36.189643  444547 command_runner.go:130] > # pause_command = "/pause"
	I0819 18:51:36.189649  444547 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 18:51:36.189655  444547 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 18:51:36.189660  444547 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 18:51:36.189670  444547 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 18:51:36.189678  444547 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 18:51:36.189684  444547 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 18:51:36.189690  444547 command_runner.go:130] > # pinned_images = [
	I0819 18:51:36.189693  444547 command_runner.go:130] > # ]
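A hedged example of the exact, glob, and keyword patterns pinned_images accepts; the image names are chosen for illustration and are not taken from this run:

	pinned_images = [
		"registry.k8s.io/pause:3.10",
		"quay.io/crio/*",
		"*nginx*",
	]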
	I0819 18:51:36.189700  444547 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 18:51:36.189707  444547 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 18:51:36.189713  444547 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 18:51:36.189721  444547 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 18:51:36.189726  444547 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 18:51:36.189732  444547 command_runner.go:130] > # signature_policy = ""
	I0819 18:51:36.189737  444547 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 18:51:36.189744  444547 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 18:51:36.189754  444547 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 18:51:36.189762  444547 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0819 18:51:36.189770  444547 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 18:51:36.189775  444547 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 18:51:36.189781  444547 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 18:51:36.189786  444547 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 18:51:36.189791  444547 command_runner.go:130] > # changing them here.
	I0819 18:51:36.189795  444547 command_runner.go:130] > # insecure_registries = [
	I0819 18:51:36.189798  444547 command_runner.go:130] > # ]
	I0819 18:51:36.189804  444547 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 18:51:36.189808  444547 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0819 18:51:36.189812  444547 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 18:51:36.189816  444547 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 18:51:36.189820  444547 command_runner.go:130] > # big_files_temporary_dir = ""
	I0819 18:51:36.189826  444547 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 18:51:36.189829  444547 command_runner.go:130] > # CNI plugins.
	I0819 18:51:36.189832  444547 command_runner.go:130] > [crio.network]
	I0819 18:51:36.189838  444547 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 18:51:36.189842  444547 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0819 18:51:36.189847  444547 command_runner.go:130] > # cni_default_network = ""
	I0819 18:51:36.189851  444547 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 18:51:36.189855  444547 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 18:51:36.189860  444547 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 18:51:36.189863  444547 command_runner.go:130] > # plugin_dirs = [
	I0819 18:51:36.189867  444547 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 18:51:36.189870  444547 command_runner.go:130] > # ]
	I0819 18:51:36.189875  444547 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0819 18:51:36.189879  444547 command_runner.go:130] > [crio.metrics]
	I0819 18:51:36.189883  444547 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 18:51:36.189887  444547 command_runner.go:130] > enable_metrics = true
	I0819 18:51:36.189891  444547 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 18:51:36.189895  444547 command_runner.go:130] > # By default, all metrics are enabled.
	I0819 18:51:36.189900  444547 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0819 18:51:36.189906  444547 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 18:51:36.189911  444547 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 18:51:36.189915  444547 command_runner.go:130] > # metrics_collectors = [
	I0819 18:51:36.189918  444547 command_runner.go:130] > # 	"operations",
	I0819 18:51:36.189923  444547 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 18:51:36.189927  444547 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 18:51:36.189931  444547 command_runner.go:130] > # 	"operations_errors",
	I0819 18:51:36.189935  444547 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 18:51:36.189938  444547 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 18:51:36.189946  444547 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 18:51:36.189950  444547 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 18:51:36.189954  444547 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 18:51:36.189958  444547 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 18:51:36.189962  444547 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 18:51:36.189970  444547 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 18:51:36.189973  444547 command_runner.go:130] > # 	"containers_oom_total",
	I0819 18:51:36.189977  444547 command_runner.go:130] > # 	"containers_oom",
	I0819 18:51:36.189980  444547 command_runner.go:130] > # 	"processes_defunct",
	I0819 18:51:36.189984  444547 command_runner.go:130] > # 	"operations_total",
	I0819 18:51:36.189988  444547 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 18:51:36.189993  444547 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 18:51:36.189997  444547 command_runner.go:130] > # 	"operations_errors_total",
	I0819 18:51:36.190001  444547 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 18:51:36.190005  444547 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 18:51:36.190009  444547 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 18:51:36.190013  444547 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 18:51:36.190017  444547 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 18:51:36.190021  444547 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 18:51:36.190026  444547 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 18:51:36.190033  444547 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 18:51:36.190035  444547 command_runner.go:130] > # ]
	I0819 18:51:36.190040  444547 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 18:51:36.190046  444547 command_runner.go:130] > # metrics_port = 9090
	I0819 18:51:36.190051  444547 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 18:51:36.190055  444547 command_runner.go:130] > # metrics_socket = ""
	I0819 18:51:36.190061  444547 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 18:51:36.190069  444547 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 18:51:36.190075  444547 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 18:51:36.190082  444547 command_runner.go:130] > # certificate on any modification event.
	I0819 18:51:36.190085  444547 command_runner.go:130] > # metrics_cert = ""
	I0819 18:51:36.190090  444547 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 18:51:36.190097  444547 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 18:51:36.190101  444547 command_runner.go:130] > # metrics_key = ""
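If the collectors were narrowed rather than left at the default of all enabled, a minimal [crio.metrics] sketch could look like the following; the collector names come from the list above and the port is the documented default, so this is an illustration rather than this run's effective configuration:

	[crio.metrics]
	enable_metrics = true
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
	]
	metrics_port = 9090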
	I0819 18:51:36.190106  444547 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 18:51:36.190110  444547 command_runner.go:130] > [crio.tracing]
	I0819 18:51:36.190117  444547 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 18:51:36.190124  444547 command_runner.go:130] > # enable_tracing = false
	I0819 18:51:36.190129  444547 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0819 18:51:36.190135  444547 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 18:51:36.190142  444547 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 18:51:36.190147  444547 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0819 18:51:36.190151  444547 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 18:51:36.190154  444547 command_runner.go:130] > [crio.nri]
	I0819 18:51:36.190158  444547 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 18:51:36.190167  444547 command_runner.go:130] > # enable_nri = false
	I0819 18:51:36.190172  444547 command_runner.go:130] > # NRI socket to listen on.
	I0819 18:51:36.190177  444547 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 18:51:36.190183  444547 command_runner.go:130] > # NRI plugin directory to use.
	I0819 18:51:36.190188  444547 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 18:51:36.190194  444547 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 18:51:36.190198  444547 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 18:51:36.190205  444547 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 18:51:36.190209  444547 command_runner.go:130] > # nri_disable_connections = false
	I0819 18:51:36.190217  444547 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 18:51:36.190221  444547 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 18:51:36.190228  444547 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 18:51:36.190233  444547 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
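A sketch of what enabling NRI would look like, reusing the commented defaults shown above; turning NRI on is an assumption made for illustration, not something this run does:

	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"
	nri_plugin_dir = "/opt/nri/plugins"
	nri_plugin_registration_timeout = "5s"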
	I0819 18:51:36.190238  444547 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 18:51:36.190243  444547 command_runner.go:130] > [crio.stats]
	I0819 18:51:36.190249  444547 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 18:51:36.190255  444547 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 18:51:36.190259  444547 command_runner.go:130] > # stats_collection_period = 0
	I0819 18:51:36.190450  444547 command_runner.go:130] ! time="2024-08-19 18:51:36.161529726Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 18:51:36.190501  444547 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0819 18:51:36.190630  444547 cni.go:84] Creating CNI manager for ""
	I0819 18:51:36.190641  444547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:51:36.190651  444547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:51:36.190674  444547 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8441 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-124593 NodeName:functional-124593 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:51:36.190815  444547 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-124593"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:51:36.190886  444547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:51:36.200955  444547 command_runner.go:130] > kubeadm
	I0819 18:51:36.200981  444547 command_runner.go:130] > kubectl
	I0819 18:51:36.200986  444547 command_runner.go:130] > kubelet
	I0819 18:51:36.201016  444547 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:51:36.201072  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:51:36.211041  444547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0819 18:51:36.228264  444547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:51:36.245722  444547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0819 18:51:36.263018  444547 ssh_runner.go:195] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I0819 18:51:36.267130  444547 command_runner.go:130] > 192.168.39.22	control-plane.minikube.internal
	I0819 18:51:36.267229  444547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:51:36.398107  444547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:51:36.412895  444547 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593 for IP: 192.168.39.22
	I0819 18:51:36.412924  444547 certs.go:194] generating shared ca certs ...
	I0819 18:51:36.412943  444547 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:51:36.413154  444547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 18:51:36.413203  444547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 18:51:36.413217  444547 certs.go:256] generating profile certs ...
	I0819 18:51:36.413317  444547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.key
	I0819 18:51:36.413414  444547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key.aa5a99d1
	I0819 18:51:36.413463  444547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key
	I0819 18:51:36.413478  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:51:36.413496  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:51:36.413514  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:51:36.413543  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:51:36.413558  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:51:36.413577  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:51:36.413596  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:51:36.413612  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:51:36.413684  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 18:51:36.413728  444547 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 18:51:36.413741  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 18:51:36.413782  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:51:36.413816  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:51:36.413853  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 18:51:36.413906  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 18:51:36.413944  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.413964  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.413981  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.414774  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:51:36.439176  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:51:36.463796  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:51:36.490998  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 18:51:36.514746  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 18:51:36.538661  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:51:36.562630  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:51:36.586739  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 18:51:36.610889  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:51:36.634562  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 18:51:36.658286  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 18:51:36.681715  444547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:51:36.698451  444547 ssh_runner.go:195] Run: openssl version
	I0819 18:51:36.704220  444547 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 18:51:36.704339  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 18:51:36.715389  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720025  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720080  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720142  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.725901  444547 command_runner.go:130] > 51391683
	I0819 18:51:36.726015  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 18:51:36.736206  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 18:51:36.747737  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752558  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752599  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752642  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.758223  444547 command_runner.go:130] > 3ec20f2e
	I0819 18:51:36.758300  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:51:36.767946  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:51:36.779143  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783850  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783902  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783950  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.789800  444547 command_runner.go:130] > b5213941
	I0819 18:51:36.789894  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
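	The three blocks above repeat the same pattern for each CA file: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it as <hash>.0 under /etc/ssl/certs so the system trust store picks it up. A minimal Go sketch of that hash-and-link step (a hypothetical helper that shells out to openssl the way the logged commands do; not minikube's actual code, and it needs root to write into /etc/ssl/certs):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and
// symlinks it as <hash>.0 under certsDir, mirroring the logged
// "openssl x509 -hash -noout" + "ln -fs" steps. Hypothetical helper.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // equivalent of "ln -fs": replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```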
	I0819 18:51:36.799700  444547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:51:36.804144  444547 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:51:36.804180  444547 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 18:51:36.804188  444547 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0819 18:51:36.804194  444547 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 18:51:36.804201  444547 command_runner.go:130] > Access: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804206  444547 command_runner.go:130] > Modify: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804217  444547 command_runner.go:130] > Change: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804222  444547 command_runner.go:130] >  Birth: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804284  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:51:36.810230  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.810339  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:51:36.816159  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.816241  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:51:36.821909  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.822019  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:51:36.827758  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.827847  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:51:36.833329  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.833420  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 18:51:36.838995  444547 command_runner.go:130] > Certificate will not expire
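	Each "openssl x509 -checkend 86400" call above asks whether the certificate expires within the next 24 hours; "Certificate will not expire" means it does not. The same check can be done in-process with crypto/x509. A minimal sketch, assuming one certificate per PEM file and a hypothetical helper name:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// at path expires within d, mirroring "openssl x509 -checkend".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```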
	I0819 18:51:36.839152  444547 kubeadm.go:392] StartCluster: {Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:51:36.839251  444547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:51:36.839310  444547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:51:36.874453  444547 command_runner.go:130] > e908c3e4c32bde71b6441e3ee7b44b5559f7c82f0a6a46e750222d6420ee5768
	I0819 18:51:36.874803  444547 command_runner.go:130] > 790104913a298ce37e7cba7691f8e4d617c4a8638cb821e244b70555efd66fbf
	I0819 18:51:36.874823  444547 command_runner.go:130] > aa84abbb4c9f6505401955b079afdb2f11c046f6879ee08a957007aaca875b03
	I0819 18:51:36.874834  444547 command_runner.go:130] > d8bfd36fbe96397e082982d91cabf0e8c731c94d88ed6c685f08a18b2da4cc6c
	I0819 18:51:36.874843  444547 command_runner.go:130] > e4040e2a8622453f1ef2fd04190fd96bea2fd3de919c6b06deaa38e83d38cf5b
	I0819 18:51:36.874899  444547 command_runner.go:130] > 8df7ac76fe134c848ddad74ca75f51e5c19afffdbd0712cb3d74333cdc6b91dc
	I0819 18:51:36.875009  444547 command_runner.go:130] > 94e0fe7cb19015bd08c7406824ba63263d67eec22f1240ef0f0eaff258976b4f
	I0819 18:51:36.875035  444547 command_runner.go:130] > 871ee5a26fc4d8fe6f04e66a22c78ba6e6b80077bd1066ab3a3c1acb639de113
	I0819 18:51:36.875045  444547 command_runner.go:130] > 70ce15fbc3bc32cd55767aab9cde75fc9d6f452d9c22c53034e5c2af14442b32
	I0819 18:51:36.875236  444547 command_runner.go:130] > 7703464bfd87d33565890b044096dcd7f50a8b1340ec99f2a88431946c69fc1b
	I0819 18:51:36.875268  444547 command_runner.go:130] > d65fc62d1475e4da38a8c95d6b13d520659291d23ac972a2d197d382a7fa6027
	I0819 18:51:36.875360  444547 command_runner.go:130] > d89c8be1ccfda0fbaa81ee519dc2e56af6349b6060b1afd9b3ac512310edc348
	I0819 18:51:36.875408  444547 command_runner.go:130] > e38111b143b726d6b06442db0379d3ecea609946581cb76904945ed68a359c74
	I0819 18:51:36.876958  444547 cri.go:89] found id: "e908c3e4c32bde71b6441e3ee7b44b5559f7c82f0a6a46e750222d6420ee5768"
	I0819 18:51:36.876978  444547 cri.go:89] found id: "790104913a298ce37e7cba7691f8e4d617c4a8638cb821e244b70555efd66fbf"
	I0819 18:51:36.876984  444547 cri.go:89] found id: "aa84abbb4c9f6505401955b079afdb2f11c046f6879ee08a957007aaca875b03"
	I0819 18:51:36.876989  444547 cri.go:89] found id: "d8bfd36fbe96397e082982d91cabf0e8c731c94d88ed6c685f08a18b2da4cc6c"
	I0819 18:51:36.876993  444547 cri.go:89] found id: "e4040e2a8622453f1ef2fd04190fd96bea2fd3de919c6b06deaa38e83d38cf5b"
	I0819 18:51:36.876998  444547 cri.go:89] found id: "8df7ac76fe134c848ddad74ca75f51e5c19afffdbd0712cb3d74333cdc6b91dc"
	I0819 18:51:36.877002  444547 cri.go:89] found id: "94e0fe7cb19015bd08c7406824ba63263d67eec22f1240ef0f0eaff258976b4f"
	I0819 18:51:36.877006  444547 cri.go:89] found id: "871ee5a26fc4d8fe6f04e66a22c78ba6e6b80077bd1066ab3a3c1acb639de113"
	I0819 18:51:36.877010  444547 cri.go:89] found id: "70ce15fbc3bc32cd55767aab9cde75fc9d6f452d9c22c53034e5c2af14442b32"
	I0819 18:51:36.877024  444547 cri.go:89] found id: "7703464bfd87d33565890b044096dcd7f50a8b1340ec99f2a88431946c69fc1b"
	I0819 18:51:36.877032  444547 cri.go:89] found id: "d65fc62d1475e4da38a8c95d6b13d520659291d23ac972a2d197d382a7fa6027"
	I0819 18:51:36.877036  444547 cri.go:89] found id: "d89c8be1ccfda0fbaa81ee519dc2e56af6349b6060b1afd9b3ac512310edc348"
	I0819 18:51:36.877040  444547 cri.go:89] found id: "e38111b143b726d6b06442db0379d3ecea609946581cb76904945ed68a359c74"
	I0819 18:51:36.877044  444547 cri.go:89] found id: ""
	I0819 18:51:36.877087  444547 ssh_runner.go:195] Run: sudo runc list -f json
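	The container IDs listed above come from filtering all CRI containers by the kube-system namespace label ("crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"). A rough Go sketch of that query as a shell-out (illustration only, not the cri.go implementation):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same crictl query shown in the log
// and returns the container IDs it prints, one per line. Hypothetical helper.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
```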
	
	
	==> CRI-O <==
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.830765818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094240830742347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c8a61db-cfb6-4fcc-a1e8-58a71ff47184 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.831201081Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb1e6b8b-f31f-46d5-bdbf-a0f2356171a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.831254319Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb1e6b8b-f31f-46d5-bdbf-a0f2356171a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.831344101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb1e6b8b-f31f-46d5-bdbf-a0f2356171a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.864549462Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=494bb9e6-b150-4045-b0b2-f6d56257859f name=/runtime.v1.RuntimeService/Version
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.864639529Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=494bb9e6-b150-4045-b0b2-f6d56257859f name=/runtime.v1.RuntimeService/Version
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.866011128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac96f351-4795-4a1d-b4cc-23961b6df9b1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.866574800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094240866460894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac96f351-4795-4a1d-b4cc-23961b6df9b1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.867241085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01a81452-66ed-40e8-93be-3aceae90f25d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.867305607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01a81452-66ed-40e8-93be-3aceae90f25d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.867395736Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01a81452-66ed-40e8-93be-3aceae90f25d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.905247704Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2fa7cc73-e081-4060-ba37-757ed3f0c4ea name=/runtime.v1.RuntimeService/Version
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.905367016Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2fa7cc73-e081-4060-ba37-757ed3f0c4ea name=/runtime.v1.RuntimeService/Version
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.906471350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e308ea9d-c309-4007-82a8-5f1e62a1b1e7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.907038415Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094240907014143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e308ea9d-c309-4007-82a8-5f1e62a1b1e7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.907699658Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=480017fe-88a3-4b04-a916-86483af8ea31 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.907774285Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=480017fe-88a3-4b04-a916-86483af8ea31 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.907865486Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=480017fe-88a3-4b04-a916-86483af8ea31 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.939595790Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1124bdc3-a267-44ef-ae26-8aa08f43253e name=/runtime.v1.RuntimeService/Version
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.939682189Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1124bdc3-a267-44ef-ae26-8aa08f43253e name=/runtime.v1.RuntimeService/Version
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.940976046Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3a040a5-aab0-4db9-9c9d-defed87eabdd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.941535208Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094240941474605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3a040a5-aab0-4db9-9c9d-defed87eabdd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.942074872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2914482-6e52-4e44-9783-6b3426b9904f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.942144256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2914482-6e52-4e44-9783-6b3426b9904f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:00 functional-124593 crio[3397]: time="2024-08-19 19:04:00.942267625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2914482-6e52-4e44-9783-6b3426b9904f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e764198234f75       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   About a minute ago   Exited              kube-controller-manager   15                  1b98c8cb37fd8       kube-controller-manager-functional-124593
	effebbec1cbf2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   About a minute ago   Exited              kube-apiserver            15                  59013506b9174       kube-apiserver-functional-124593
	e3ddc8f73f9e6       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   4 minutes ago        Running             kube-scheduler            4                   ddca0e39cb48d       kube-scheduler-functional-124593
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.066009] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.197712] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.124470] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.281624] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +4.011758] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +4.140421] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.057144] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989777] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[  +0.082461] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.724179] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.114451] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.497778] kauditd_printk_skb: 98 callbacks suppressed
	[Aug19 18:50] systemd-fstab-generator[3042]: Ignoring "noauto" option for root device
	[  +0.214760] systemd-fstab-generator[3119]: Ignoring "noauto" option for root device
	[  +0.240969] systemd-fstab-generator[3138]: Ignoring "noauto" option for root device
	[  +0.217290] systemd-fstab-generator[3183]: Ignoring "noauto" option for root device
	[  +0.371422] systemd-fstab-generator[3242]: Ignoring "noauto" option for root device
	[Aug19 18:51] systemd-fstab-generator[3507]: Ignoring "noauto" option for root device
	[  +0.085616] kauditd_printk_skb: 184 callbacks suppressed
	[  +1.984129] systemd-fstab-generator[3627]: Ignoring "noauto" option for root device
	[Aug19 18:52] kauditd_printk_skb: 81 callbacks suppressed
	[Aug19 18:55] systemd-fstab-generator[9158]: Ignoring "noauto" option for root device
	[Aug19 18:56] kauditd_printk_skb: 70 callbacks suppressed
	[Aug19 18:59] systemd-fstab-generator[10102]: Ignoring "noauto" option for root device
	[Aug19 19:00] kauditd_printk_skb: 54 callbacks suppressed
	
	
	==> kernel <==
	 19:04:01 up 14 min,  0 users,  load average: 0.09, 0.14, 0.10
	Linux functional-124593 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d] <==
	I0819 19:02:43.467753       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0819 19:02:43.743186       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:43.743280       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0819 19:02:43.751731       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 19:02:43.755112       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 19:02:43.758721       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 19:02:43.758852       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 19:02:43.759065       1 instance.go:232] Using reconciler: lease
	W0819 19:02:43.760135       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:44.743832       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:44.743963       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:44.761569       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:46.185098       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:46.207828       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:46.382784       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:48.822814       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:48.974921       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:49.351895       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:53.115051       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:53.237838       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:54.161281       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:59.099479       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:59.492816       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:03:01.794931       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0819 19:03:03.759664       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9] <==
	I0819 19:02:44.745523       1 serving.go:386] Generated self-signed cert in-memory
	I0819 19:02:44.990908       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 19:02:44.990991       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:02:44.992289       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 19:02:44.992410       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 19:02:44.992616       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 19:02:44.992692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 19:03:04.995138       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.22:8441/healthz\": dial tcp 192.168.39.22:8441: connect: connection refused"
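	The controller-manager gives up after it cannot reach the apiserver's /healthz endpoint at 192.168.39.22:8441. A quick way to reproduce that probe outside the cluster is a plain HTTPS GET; a minimal sketch (endpoint taken from the log line above, TLS verification skipped purely for illustration):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz performs the same kind of check the controller-manager
// log reports failing: GET https://<apiserver>/healthz.
func probeHealthz(url string) (string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return "", err // e.g. "connect: connection refused", as seen above
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	body, err := probeHealthz("https://192.168.39.22:8441/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	fmt.Println("healthz:", body)
}
```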
	
	
	==> kube-scheduler [e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca] <==
	E0819 19:03:25.611985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.22:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:29.616112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.22:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:29.616172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.22:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:32.074182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.22:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:32.074280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.22:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:34.545647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.22:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:34.545700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.22:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:46.993572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.22:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:46.993633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.22:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:49.036950       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.22:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:49.037018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.22:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:49.224105       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.22:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:49.224150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.22:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:50.723059       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.22:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:50.723109       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.22:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:51.629827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.22:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:51.629910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.22:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:56.598327       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.22:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:56.598382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.22:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:57.267271       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.22:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:57.267349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.22:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:04:00.473828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.22:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:04:00.473873       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.22:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:04:00.878775       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.22:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:04:00.878819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.22:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 19 19:03:48 functional-124593 kubelet[10109]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:03:48 functional-124593 kubelet[10109]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:03:48 functional-124593 kubelet[10109]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.385433   10109 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094228385106955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.385459   10109 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094228385106955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.752200   10109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/default/events\": dial tcp 192.168.39.22:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-124593.17ed3659059f9af6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-124593,UID:functional-124593,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-124593 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-124593,},FirstTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,LastTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:fu
nctional-124593,}"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: I0819 19:03:48.956588   10109 kubelet_node_status.go:72] "Attempting to register node" node="functional-124593"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.957346   10109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 192.168.39.22:8441: connect: connection refused" node="functional-124593"
	Aug 19 19:03:52 functional-124593 kubelet[10109]: W0819 19:03:52.762074   10109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-124593&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	Aug 19 19:03:52 functional-124593 kubelet[10109]: E0819 19:03:52.762188   10109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-124593&limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	Aug 19 19:03:53 functional-124593 kubelet[10109]: I0819 19:03:53.319902   10109 scope.go:117] "RemoveContainer" containerID="effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d"
	Aug 19 19:03:53 functional-124593 kubelet[10109]: E0819 19:03:53.320038   10109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-124593_kube-system(15de45e6effb382c12ca8494f33bff76)\"" pod="kube-system/kube-apiserver-functional-124593" podUID="15de45e6effb382c12ca8494f33bff76"
	Aug 19 19:03:53 functional-124593 kubelet[10109]: E0819 19:03:53.777098   10109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-124593?timeout=10s\": dial tcp 192.168.39.22:8441: connect: connection refused" interval="7s"
	Aug 19 19:03:55 functional-124593 kubelet[10109]: I0819 19:03:55.958779   10109 kubelet_node_status.go:72] "Attempting to register node" node="functional-124593"
	Aug 19 19:03:55 functional-124593 kubelet[10109]: E0819 19:03:55.959704   10109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 192.168.39.22:8441: connect: connection refused" node="functional-124593"
	Aug 19 19:03:56 functional-124593 kubelet[10109]: E0819 19:03:56.914816   10109 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://control-plane.minikube.internal:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	Aug 19 19:03:58 functional-124593 kubelet[10109]: E0819 19:03:58.326664   10109 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-functional-124593_kube-system_1d81c5d63cba07001a82e239314e39e2_1\" is already in use by 847f5422f7736630cbb85c950b0ce7fee365173fd629c7c35c4baca6131ab56d. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="fb4264c69f603d50f969b7ac2f0dad4c593bae0da887198d6e0d16aab460b73b"
	Aug 19 19:03:58 functional-124593 kubelet[10109]: E0819 19:03:58.326817   10109 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.15-0,Command:[etcd --advertise-client-urls=https://192.168.39.22:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.39.22:2380 --initial-cluster=functional-124593=https://192.168.39.22:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.39.22:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.39.22:2380 --name=functional-124593 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt --proxy-refresh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-functional-124593_kube-system(1d81c5d63cba07001a82e239314e39e2): CreateContainerError: the container name \"k8s_etcd_etcd-functional-124593_kube-system_1d81c5d63cba07001a82e239314e39e2_1\" is already in use by 847f5422f7736630cbb85c950b0ce7fee365173fd629c7c35c4baca6131ab56d. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Aug 19 19:03:58 functional-124593 kubelet[10109]: E0819 19:03:58.328076   10109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-functional-124593_kube-system_1d81c5d63cba07001a82e239314e39e2_1\\\" is already in use by 847f5422f7736630cbb85c950b0ce7fee365173fd629c7c35c4baca6131ab56d. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-functional-124593" podUID="1d81c5d63cba07001a82e239314e39e2"
	Aug 19 19:03:58 functional-124593 kubelet[10109]: E0819 19:03:58.387646   10109 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094238387370954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:58 functional-124593 kubelet[10109]: E0819 19:03:58.387683   10109 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094238387370954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:58 functional-124593 kubelet[10109]: E0819 19:03:58.754140   10109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/default/events\": dial tcp 192.168.39.22:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-124593.17ed3659059f9af6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-124593,UID:functional-124593,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-124593 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-124593,},FirstTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,LastTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-124593,}"
	Aug 19 19:03:59 functional-124593 kubelet[10109]: I0819 19:03:59.319810   10109 scope.go:117] "RemoveContainer" containerID="e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9"
	Aug 19 19:03:59 functional-124593 kubelet[10109]: E0819 19:03:59.319967   10109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-124593_kube-system(c71ff42fdd5902541920b0f91ca1cbbc)\"" pod="kube-system/kube-controller-manager-functional-124593" podUID="c71ff42fdd5902541920b0f91ca1cbbc"
	Aug 19 19:04:00 functional-124593 kubelet[10109]: E0819 19:04:00.778272   10109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-124593?timeout=10s\": dial tcp 192.168.39.22:8441: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:04:00.560935  448342 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-430949/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-124593 -n functional-124593
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-124593 -n functional-124593: exit status 2 (228.183003ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-124593" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (1.50s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-124593 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-124593 get pods: exit status 1 (99.679566ms)

                                                
                                                
** stderr ** 
	E0819 19:04:01.724432  448408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	E0819 19:04:01.726787  448408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	E0819 19:04:01.728433  448408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	E0819 19:04:01.730116  448408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	E0819 19:04:01.731770  448408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.22:8441/api?timeout=32s\": dial tcp 192.168.39.22:8441: connect: connection refused"
	The connection to the server 192.168.39.22:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-124593 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-124593 -n functional-124593
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-124593 -n functional-124593: exit status 2 (224.977669ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 logs -n 25
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                     |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| pause   | nospam-212543 --log_dir                     | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 pause                    |                   |         |         |                     |                     |
	| unpause | nospam-212543 --log_dir                     | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 unpause                  |                   |         |         |                     |                     |
	| unpause | nospam-212543 --log_dir                     | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 unpause                  |                   |         |         |                     |                     |
	| unpause | nospam-212543 --log_dir                     | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 unpause                  |                   |         |         |                     |                     |
	| stop    | nospam-212543 --log_dir                     | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:48 UTC |
	|         | /tmp/nospam-212543 stop                     |                   |         |         |                     |                     |
	| stop    | nospam-212543 --log_dir                     | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:48 UTC | 19 Aug 24 18:49 UTC |
	|         | /tmp/nospam-212543 stop                     |                   |         |         |                     |                     |
	| stop    | nospam-212543 --log_dir                     | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC | 19 Aug 24 18:49 UTC |
	|         | /tmp/nospam-212543 stop                     |                   |         |         |                     |                     |
	| delete  | -p nospam-212543                            | nospam-212543     | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC | 19 Aug 24 18:49 UTC |
	| start   | -p functional-124593                        | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC | 19 Aug 24 18:49 UTC |
	|         | --memory=4000                               |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                       |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                    |                   |         |         |                     |                     |
	|         | --container-runtime=crio                    |                   |         |         |                     |                     |
	| start   | -p functional-124593                        | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 18:49 UTC |                     |
	|         | --alsologtostderr -v=8                      |                   |         |         |                     |                     |
	| cache   | functional-124593 cache add                 | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | registry.k8s.io/pause:3.1                   |                   |         |         |                     |                     |
	| cache   | functional-124593 cache add                 | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | registry.k8s.io/pause:3.3                   |                   |         |         |                     |                     |
	| cache   | functional-124593 cache add                 | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| cache   | functional-124593 cache add                 | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | minikube-local-cache-test:functional-124593 |                   |         |         |                     |                     |
	| cache   | functional-124593 cache delete              | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | minikube-local-cache-test:functional-124593 |                   |         |         |                     |                     |
	| cache   | delete                                      | minikube          | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | registry.k8s.io/pause:3.3                   |                   |         |         |                     |                     |
	| cache   | list                                        | minikube          | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	| ssh     | functional-124593 ssh sudo                  | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | crictl images                               |                   |         |         |                     |                     |
	| ssh     | functional-124593                           | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | ssh sudo crictl rmi                         |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| ssh     | functional-124593 ssh                       | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC |                     |
	|         | sudo crictl inspecti                        |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| cache   | functional-124593 cache reload              | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	| ssh     | functional-124593 ssh                       | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:04 UTC |
	|         | sudo crictl inspecti                        |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| cache   | delete                                      | minikube          | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | registry.k8s.io/pause:3.1                   |                   |         |         |                     |                     |
	| cache   | delete                                      | minikube          | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| kubectl | functional-124593 kubectl --                | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --context functional-124593                 |                   |         |         |                     |                     |
	|         | get pods                                    |                   |         |         |                     |                     |
	|---------|---------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:49:56
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:49:56.790328  444547 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:49:56.790453  444547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:49:56.790459  444547 out.go:358] Setting ErrFile to fd 2...
	I0819 18:49:56.790463  444547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:49:56.790638  444547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 18:49:56.791174  444547 out.go:352] Setting JSON to false
	I0819 18:49:56.792114  444547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9148,"bootTime":1724084249,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:49:56.792181  444547 start.go:139] virtualization: kvm guest
	I0819 18:49:56.794648  444547 out.go:177] * [functional-124593] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:49:56.796256  444547 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 18:49:56.796302  444547 notify.go:220] Checking for updates...
	I0819 18:49:56.799145  444547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:49:56.800604  444547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 18:49:56.802061  444547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 18:49:56.803353  444547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:49:56.804793  444547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:49:56.806582  444547 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:49:56.806680  444547 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 18:49:56.807152  444547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:49:56.807235  444547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:49:56.823439  444547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0819 18:49:56.823898  444547 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:49:56.824445  444547 main.go:141] libmachine: Using API Version  1
	I0819 18:49:56.824484  444547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:49:56.824923  444547 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:49:56.825223  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.864107  444547 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:49:56.865533  444547 start.go:297] selected driver: kvm2
	I0819 18:49:56.865559  444547 start.go:901] validating driver "kvm2" against &{Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:49:56.865676  444547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:49:56.866051  444547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:49:56.866145  444547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:49:56.882415  444547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:49:56.883177  444547 cni.go:84] Creating CNI manager for ""
	I0819 18:49:56.883193  444547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:49:56.883244  444547 start.go:340] cluster config:
	{Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:49:56.883396  444547 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:49:56.885199  444547 out.go:177] * Starting "functional-124593" primary control-plane node in "functional-124593" cluster
	I0819 18:49:56.886649  444547 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:49:56.886699  444547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:49:56.886708  444547 cache.go:56] Caching tarball of preloaded images
	I0819 18:49:56.886828  444547 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:49:56.886844  444547 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:49:56.886977  444547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/config.json ...
	I0819 18:49:56.887255  444547 start.go:360] acquireMachinesLock for functional-124593: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:49:56.887316  444547 start.go:364] duration metric: took 31.483µs to acquireMachinesLock for "functional-124593"
	I0819 18:49:56.887333  444547 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:49:56.887345  444547 fix.go:54] fixHost starting: 
	I0819 18:49:56.887711  444547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:49:56.887765  444547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:49:56.903210  444547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43899
	I0819 18:49:56.903686  444547 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:49:56.904263  444547 main.go:141] libmachine: Using API Version  1
	I0819 18:49:56.904298  444547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:49:56.904680  444547 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:49:56.904935  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.905158  444547 main.go:141] libmachine: (functional-124593) Calling .GetState
	I0819 18:49:56.906833  444547 fix.go:112] recreateIfNeeded on functional-124593: state=Running err=<nil>
	W0819 18:49:56.906856  444547 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:49:56.908782  444547 out.go:177] * Updating the running kvm2 "functional-124593" VM ...
	I0819 18:49:56.910443  444547 machine.go:93] provisionDockerMachine start ...
	I0819 18:49:56.910478  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:49:56.910823  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:56.913259  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:56.913615  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:56.913638  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:56.913753  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:56.914043  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:56.914207  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:56.914341  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:56.914485  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:56.914684  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:56.914697  444547 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:49:57.017550  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-124593
	
	I0819 18:49:57.017585  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.017923  444547 buildroot.go:166] provisioning hostname "functional-124593"
	I0819 18:49:57.017956  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.018164  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.021185  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.021551  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.021598  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.021780  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.022011  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.022177  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.022309  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.022452  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.022654  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.022668  444547 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-124593 && echo "functional-124593" | sudo tee /etc/hostname
	I0819 18:49:57.141478  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-124593
	
	I0819 18:49:57.141514  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.144157  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.144414  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.144449  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.144722  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.144969  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.145192  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.145388  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.145570  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.145756  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.145776  444547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-124593' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-124593/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-124593' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:49:57.249989  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:49:57.250034  444547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 18:49:57.250086  444547 buildroot.go:174] setting up certificates
	I0819 18:49:57.250099  444547 provision.go:84] configureAuth start
	I0819 18:49:57.250118  444547 main.go:141] libmachine: (functional-124593) Calling .GetMachineName
	I0819 18:49:57.250442  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:49:57.253181  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.253490  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.253519  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.253712  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.256213  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.256541  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.256586  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.256752  444547 provision.go:143] copyHostCerts
	I0819 18:49:57.256784  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 18:49:57.256824  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 18:49:57.256848  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 18:49:57.256918  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 18:49:57.257021  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 18:49:57.257043  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 18:49:57.257048  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 18:49:57.257071  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 18:49:57.257122  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 18:49:57.257160  444547 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 18:49:57.257176  444547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 18:49:57.257198  444547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 18:49:57.257249  444547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.functional-124593 san=[127.0.0.1 192.168.39.22 functional-124593 localhost minikube]
	I0819 18:49:57.505075  444547 provision.go:177] copyRemoteCerts
	I0819 18:49:57.505163  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:49:57.505194  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.508248  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.508654  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.508690  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.508942  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.509160  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.509381  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.509556  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:49:57.591978  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:49:57.592075  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 18:49:57.620626  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:49:57.620699  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 18:49:57.646085  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:49:57.646168  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:49:57.671918  444547 provision.go:87] duration metric: took 421.80001ms to configureAuth
	I0819 18:49:57.671954  444547 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:49:57.672176  444547 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:49:57.672267  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:49:57.675054  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.675420  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:49:57.675456  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:49:57.675667  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:49:57.675902  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.676057  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:49:57.676211  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:49:57.676410  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:49:57.676596  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:49:57.676611  444547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:50:03.241286  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:50:03.241321  444547 machine.go:96] duration metric: took 6.330855619s to provisionDockerMachine
	I0819 18:50:03.241334  444547 start.go:293] postStartSetup for "functional-124593" (driver="kvm2")
	I0819 18:50:03.241346  444547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:50:03.241368  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.241892  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:50:03.241919  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.244822  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.245262  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.245291  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.245469  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.245716  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.245889  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.246048  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.327892  444547 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:50:03.332233  444547 command_runner.go:130] > NAME=Buildroot
	I0819 18:50:03.332262  444547 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 18:50:03.332268  444547 command_runner.go:130] > ID=buildroot
	I0819 18:50:03.332276  444547 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 18:50:03.332284  444547 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 18:50:03.332381  444547 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:50:03.332400  444547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 18:50:03.332476  444547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 18:50:03.332579  444547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 18:50:03.332593  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 18:50:03.332685  444547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts -> hosts in /etc/test/nested/copy/438159
	I0819 18:50:03.332692  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts -> /etc/test/nested/copy/438159/hosts
	I0819 18:50:03.332732  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/438159
	I0819 18:50:03.343618  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 18:50:03.367775  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts --> /etc/test/nested/copy/438159/hosts (40 bytes)
	I0819 18:50:03.392035  444547 start.go:296] duration metric: took 150.684705ms for postStartSetup
	I0819 18:50:03.392093  444547 fix.go:56] duration metric: took 6.504748451s for fixHost
	I0819 18:50:03.392120  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.394902  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.395203  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.395231  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.395450  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.395682  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.395876  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.396030  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.396215  444547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:50:03.396420  444547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0819 18:50:03.396434  444547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:50:03.498031  444547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724093403.488650243
	
	I0819 18:50:03.498062  444547 fix.go:216] guest clock: 1724093403.488650243
	I0819 18:50:03.498069  444547 fix.go:229] Guest: 2024-08-19 18:50:03.488650243 +0000 UTC Remote: 2024-08-19 18:50:03.392098301 +0000 UTC m=+6.637869514 (delta=96.551942ms)
	I0819 18:50:03.498115  444547 fix.go:200] guest clock delta is within tolerance: 96.551942ms
	I0819 18:50:03.498121  444547 start.go:83] releasing machines lock for "functional-124593", held for 6.610795712s
	I0819 18:50:03.498146  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.498456  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:50:03.501197  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.501685  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.501717  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.501963  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502567  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502825  444547 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 18:50:03.502931  444547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:50:03.502977  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.503104  444547 ssh_runner.go:195] Run: cat /version.json
	I0819 18:50:03.503130  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
	I0819 18:50:03.505641  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.505904  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.505942  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.505982  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.506089  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.506248  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:50:03.506286  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:50:03.506326  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.506510  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.506529  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
	I0819 18:50:03.506705  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
	I0819 18:50:03.506709  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.506856  444547 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
	I0819 18:50:03.507023  444547 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
	I0819 18:50:03.596444  444547 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 18:50:03.596676  444547 ssh_runner.go:195] Run: systemctl --version
	I0819 18:50:03.642156  444547 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 18:50:03.642205  444547 command_runner.go:130] > systemd 252 (252)
	I0819 18:50:03.642223  444547 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
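The `cat /version.json` output above is a small JSON document describing the ISO and kicbase builds baked into the VM. A sketch of decoding it, with struct fields named after the keys visible in the log (not an official minikube type):

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the keys visible in the /version.json output above.
type versionInfo struct {
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
}

func main() {
	raw := []byte(`{"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}`)
	var v versionInfo
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}
	fmt.Printf("iso=%s kicbase=%s minikube=%s\n", v.ISOVersion, v.KicbaseVersion, v.MinikubeVersion)
}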
	I0819 18:50:03.642284  444547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:50:04.032467  444547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 18:50:04.057730  444547 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 18:50:04.057919  444547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:50:04.058009  444547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:50:04.094792  444547 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
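cni.go first checks for a loopback config and then renames any bridge/podman configs under /etc/cni/net.d to *.mk_disabled so they cannot conflict with the cluster's chosen CNI; here nothing needed disabling. A rough Go equivalent of that rename step (the real flow shells out to `find`/`mv` over SSH, as logged above):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI configs so the runtime ignores them,
// mirroring the `find ... -exec mv {} {}.mk_disabled` command in the log.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	moved, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	fmt.Println("disabled:", moved, "err:", err)
}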
	I0819 18:50:04.094824  444547 start.go:495] detecting cgroup driver to use...
	I0819 18:50:04.094892  444547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:50:04.216404  444547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:50:04.250117  444547 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:50:04.250182  444547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:50:04.298450  444547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:50:04.329276  444547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:50:04.576464  444547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:50:04.796403  444547 docker.go:233] disabling docker service ...
	I0819 18:50:04.796509  444547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:50:04.824051  444547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:50:04.841929  444547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:50:05.032450  444547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:50:05.230662  444547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
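Because this cluster uses CRI-O, the competing runtimes are shut down first: containerd is stopped, then cri-docker and docker have their sockets and services stopped, disabled, and masked, with `systemctl is-active --quiet` confirming they are down. A sketch of that sequence as a Go helper shelling out to systemctl (unit names taken from the log; the error handling is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// runSystemctl runs a single systemctl action and returns any error output.
func runSystemctl(args ...string) error {
	out, err := exec.Command("sudo", append([]string{"systemctl"}, args...)...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
	}
	return nil
}

// disableOtherRuntimes mirrors the stop/disable/mask sequence in the log above.
func disableOtherRuntimes() {
	steps := [][]string{
		{"stop", "-f", "containerd"},
		{"stop", "-f", "cri-docker.socket"},
		{"stop", "-f", "cri-docker.service"},
		{"disable", "cri-docker.socket"},
		{"mask", "cri-docker.service"},
		{"stop", "-f", "docker.socket"},
		{"stop", "-f", "docker.service"},
		{"disable", "docker.socket"},
		{"mask", "docker.service"},
	}
	for _, s := range steps {
		if err := runSystemctl(s...); err != nil {
			fmt.Println("continuing:", err) // in this sketch, failures are logged and the sequence continues
		}
	}
}

func main() { disableOtherRuntimes() }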
	I0819 18:50:05.261270  444547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:50:05.307751  444547 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0819 18:50:05.308002  444547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:50:05.308071  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.325985  444547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:50:05.326072  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.340857  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.355923  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.368797  444547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:50:05.384107  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.396132  444547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.407497  444547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:50:05.421137  444547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:50:05.431493  444547 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 18:50:05.431832  444547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
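The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is set to cgroupfs, conmon_cgroup = "pod" is re-added, net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls, and IP forwarding is enabled before the daemon reload. A small Go sketch of the same kind of line-oriented rewrite (illustrative only; the real flow pipes sed over SSH):

package main

import (
	"fmt"
	"regexp"
)

var (
	pauseImageRe = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupMgrRe  = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// rewriteCrioConf applies the same substitutions as the sed commands in the log.
func rewriteCrioConf(conf []byte, pauseImage, cgroupManager string) []byte {
	conf = pauseImageRe.ReplaceAll(conf, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	conf = cgroupMgrRe.ReplaceAll(conf, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return conf
}

func main() {
	in := []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n")
	out := rewriteCrioConf(in, "registry.k8s.io/pause:3.10", "cgroupfs")
	fmt.Print(string(out))
}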
	I0819 18:50:05.444023  444547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:50:05.610160  444547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:51:35.953940  444547 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.343723561s)
	I0819 18:51:35.953984  444547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:51:35.954042  444547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:51:35.958905  444547 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 18:51:35.958943  444547 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 18:51:35.958954  444547 command_runner.go:130] > Device: 0,22	Inode: 1653        Links: 1
	I0819 18:51:35.958965  444547 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 18:51:35.958973  444547 command_runner.go:130] > Access: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958982  444547 command_runner.go:130] > Modify: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958993  444547 command_runner.go:130] > Change: 2024-08-19 18:51:35.749660900 +0000
	I0819 18:51:35.958999  444547 command_runner.go:130] >  Birth: -
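The crio restart above took 1m30s, after which start.go waits up to 60s for /var/run/crio/crio.sock to appear (verified with `stat`) before probing crictl. A minimal sketch of such a socket wait (hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a unix socket path until it exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}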
	I0819 18:51:35.959026  444547 start.go:563] Will wait 60s for crictl version
	I0819 18:51:35.959080  444547 ssh_runner.go:195] Run: which crictl
	I0819 18:51:35.962908  444547 command_runner.go:130] > /usr/bin/crictl
	I0819 18:51:35.963010  444547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:51:35.995379  444547 command_runner.go:130] > Version:  0.1.0
	I0819 18:51:35.995417  444547 command_runner.go:130] > RuntimeName:  cri-o
	I0819 18:51:35.995425  444547 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 18:51:35.995433  444547 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 18:51:35.996527  444547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
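The `crictl version` output is plain `Key:  value` text. One way to turn it into a map so fields such as RuntimeVersion can be checked against the expected CRI-O release (parsing approach assumed, not taken from minikube):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion splits "Key:  value" lines into a map.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	f := parseCrictlVersion(out)
	fmt.Println(f["RuntimeName"], f["RuntimeVersion"])
}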
	I0819 18:51:35.996626  444547 ssh_runner.go:195] Run: crio --version
	I0819 18:51:36.025037  444547 command_runner.go:130] > crio version 1.29.1
	I0819 18:51:36.025067  444547 command_runner.go:130] > Version:        1.29.1
	I0819 18:51:36.025076  444547 command_runner.go:130] > GitCommit:      unknown
	I0819 18:51:36.025082  444547 command_runner.go:130] > GitCommitDate:  unknown
	I0819 18:51:36.025088  444547 command_runner.go:130] > GitTreeState:   clean
	I0819 18:51:36.025097  444547 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 18:51:36.025103  444547 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 18:51:36.025108  444547 command_runner.go:130] > Compiler:       gc
	I0819 18:51:36.025115  444547 command_runner.go:130] > Platform:       linux/amd64
	I0819 18:51:36.025122  444547 command_runner.go:130] > Linkmode:       dynamic
	I0819 18:51:36.025137  444547 command_runner.go:130] > BuildTags:      
	I0819 18:51:36.025142  444547 command_runner.go:130] >   containers_image_ostree_stub
	I0819 18:51:36.025147  444547 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 18:51:36.025151  444547 command_runner.go:130] >   btrfs_noversion
	I0819 18:51:36.025156  444547 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 18:51:36.025161  444547 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 18:51:36.025169  444547 command_runner.go:130] >   seccomp
	I0819 18:51:36.025175  444547 command_runner.go:130] > LDFlags:          unknown
	I0819 18:51:36.025182  444547 command_runner.go:130] > SeccompEnabled:   true
	I0819 18:51:36.025187  444547 command_runner.go:130] > AppArmorEnabled:  false
	I0819 18:51:36.025256  444547 ssh_runner.go:195] Run: crio --version
	I0819 18:51:36.052216  444547 command_runner.go:130] > crio version 1.29.1
	I0819 18:51:36.052240  444547 command_runner.go:130] > Version:        1.29.1
	I0819 18:51:36.052247  444547 command_runner.go:130] > GitCommit:      unknown
	I0819 18:51:36.052252  444547 command_runner.go:130] > GitCommitDate:  unknown
	I0819 18:51:36.052256  444547 command_runner.go:130] > GitTreeState:   clean
	I0819 18:51:36.052261  444547 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 18:51:36.052266  444547 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 18:51:36.052270  444547 command_runner.go:130] > Compiler:       gc
	I0819 18:51:36.052282  444547 command_runner.go:130] > Platform:       linux/amd64
	I0819 18:51:36.052288  444547 command_runner.go:130] > Linkmode:       dynamic
	I0819 18:51:36.052294  444547 command_runner.go:130] > BuildTags:      
	I0819 18:51:36.052301  444547 command_runner.go:130] >   containers_image_ostree_stub
	I0819 18:51:36.052307  444547 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 18:51:36.052317  444547 command_runner.go:130] >   btrfs_noversion
	I0819 18:51:36.052324  444547 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 18:51:36.052333  444547 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 18:51:36.052338  444547 command_runner.go:130] >   seccomp
	I0819 18:51:36.052345  444547 command_runner.go:130] > LDFlags:          unknown
	I0819 18:51:36.052350  444547 command_runner.go:130] > SeccompEnabled:   true
	I0819 18:51:36.052356  444547 command_runner.go:130] > AppArmorEnabled:  false
	I0819 18:51:36.055292  444547 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:51:36.056598  444547 main.go:141] libmachine: (functional-124593) Calling .GetIP
	I0819 18:51:36.059532  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:51:36.059864  444547 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
	I0819 18:51:36.059895  444547 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
	I0819 18:51:36.060137  444547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:51:36.064416  444547 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0819 18:51:36.064570  444547 kubeadm.go:883] updating cluster {Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:51:36.064698  444547 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:51:36.064782  444547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:51:36.110239  444547 command_runner.go:130] > {
	I0819 18:51:36.110264  444547 command_runner.go:130] >   "images": [
	I0819 18:51:36.110268  444547 command_runner.go:130] >     {
	I0819 18:51:36.110277  444547 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 18:51:36.110281  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110287  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 18:51:36.110290  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110294  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110303  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 18:51:36.110310  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 18:51:36.110314  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110319  444547 command_runner.go:130] >       "size": "87165492",
	I0819 18:51:36.110324  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110330  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110343  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110350  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110359  444547 command_runner.go:130] >     },
	I0819 18:51:36.110364  444547 command_runner.go:130] >     {
	I0819 18:51:36.110373  444547 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 18:51:36.110391  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110399  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 18:51:36.110402  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110406  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110414  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 18:51:36.110425  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 18:51:36.110432  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110443  444547 command_runner.go:130] >       "size": "31470524",
	I0819 18:51:36.110453  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110461  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110468  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110477  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110483  444547 command_runner.go:130] >     },
	I0819 18:51:36.110502  444547 command_runner.go:130] >     {
	I0819 18:51:36.110513  444547 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 18:51:36.110522  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110533  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 18:51:36.110539  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110549  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110563  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 18:51:36.110577  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 18:51:36.110586  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110594  444547 command_runner.go:130] >       "size": "61245718",
	I0819 18:51:36.110601  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.110611  444547 command_runner.go:130] >       "username": "nonroot",
	I0819 18:51:36.110621  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110631  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110637  444547 command_runner.go:130] >     },
	I0819 18:51:36.110645  444547 command_runner.go:130] >     {
	I0819 18:51:36.110658  444547 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 18:51:36.110668  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110677  444547 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 18:51:36.110684  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110701  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110715  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 18:51:36.110733  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 18:51:36.110742  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110753  444547 command_runner.go:130] >       "size": "149009664",
	I0819 18:51:36.110760  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.110764  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.110770  444547 command_runner.go:130] >       },
	I0819 18:51:36.110777  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110787  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110797  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110805  444547 command_runner.go:130] >     },
	I0819 18:51:36.110814  444547 command_runner.go:130] >     {
	I0819 18:51:36.110823  444547 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 18:51:36.110832  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110842  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 18:51:36.110849  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110853  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.110868  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 18:51:36.110884  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 18:51:36.110893  444547 command_runner.go:130] >       ],
	I0819 18:51:36.110901  444547 command_runner.go:130] >       "size": "95233506",
	I0819 18:51:36.110909  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.110918  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.110927  444547 command_runner.go:130] >       },
	I0819 18:51:36.110934  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.110939  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.110947  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.110956  444547 command_runner.go:130] >     },
	I0819 18:51:36.110965  444547 command_runner.go:130] >     {
	I0819 18:51:36.110978  444547 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 18:51:36.110988  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.110999  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 18:51:36.111007  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111013  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111025  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 18:51:36.111040  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 18:51:36.111049  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111060  444547 command_runner.go:130] >       "size": "89437512",
	I0819 18:51:36.111070  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111080  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.111089  444547 command_runner.go:130] >       },
	I0819 18:51:36.111096  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111104  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111114  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111122  444547 command_runner.go:130] >     },
	I0819 18:51:36.111128  444547 command_runner.go:130] >     {
	I0819 18:51:36.111140  444547 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 18:51:36.111148  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111154  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 18:51:36.111163  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111170  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111185  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 18:51:36.111199  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 18:51:36.111206  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111213  444547 command_runner.go:130] >       "size": "92728217",
	I0819 18:51:36.111223  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.111230  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111239  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111246  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111254  444547 command_runner.go:130] >     },
	I0819 18:51:36.111267  444547 command_runner.go:130] >     {
	I0819 18:51:36.111281  444547 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 18:51:36.111290  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111299  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 18:51:36.111307  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111313  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111333  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 18:51:36.111345  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 18:51:36.111351  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111355  444547 command_runner.go:130] >       "size": "68420936",
	I0819 18:51:36.111361  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111365  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.111370  444547 command_runner.go:130] >       },
	I0819 18:51:36.111374  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111381  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111385  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.111389  444547 command_runner.go:130] >     },
	I0819 18:51:36.111393  444547 command_runner.go:130] >     {
	I0819 18:51:36.111399  444547 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 18:51:36.111405  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.111410  444547 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 18:51:36.111415  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111420  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.111429  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 18:51:36.111438  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 18:51:36.111442  444547 command_runner.go:130] >       ],
	I0819 18:51:36.111448  444547 command_runner.go:130] >       "size": "742080",
	I0819 18:51:36.111452  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.111456  444547 command_runner.go:130] >         "value": "65535"
	I0819 18:51:36.111460  444547 command_runner.go:130] >       },
	I0819 18:51:36.111464  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.111480  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.111486  444547 command_runner.go:130] >       "pinned": true
	I0819 18:51:36.111494  444547 command_runner.go:130] >     }
	I0819 18:51:36.111502  444547 command_runner.go:130] >   ]
	I0819 18:51:36.111507  444547 command_runner.go:130] > }
	I0819 18:51:36.111701  444547 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:51:36.111714  444547 crio.go:433] Images already preloaded, skipping extraction
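crio.go:514 concludes that the preload tarball does not need extracting by listing images with `sudo crictl images --output json` and confirming every required image is already present. A sketch of that check against the JSON shape shown above (struct fields mirror the log output; the helper itself is illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the fields visible in the `crictl images --output json` output above.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
}

type imageList struct {
	Images []image `json:"images"`
}

// allPreloaded reports whether every required tag appears in the runtime's image list.
func allPreloaded(raw []byte, required []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			have[t] = true
		}
	}
	for _, r := range required {
		if !have[r] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	raw := []byte(`{"images":[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoTags":["registry.k8s.io/pause:3.10"]}]}`)
	ok, err := allPreloaded(raw, []string{"registry.k8s.io/pause:3.10"})
	fmt.Println(ok, err)
}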
	I0819 18:51:36.111767  444547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:51:36.143806  444547 command_runner.go:130] > {
	I0819 18:51:36.143831  444547 command_runner.go:130] >   "images": [
	I0819 18:51:36.143835  444547 command_runner.go:130] >     {
	I0819 18:51:36.143843  444547 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 18:51:36.143848  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.143854  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 18:51:36.143857  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143861  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.143870  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 18:51:36.143877  444547 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 18:51:36.143883  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143887  444547 command_runner.go:130] >       "size": "87165492",
	I0819 18:51:36.143891  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.143898  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.143904  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.143909  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.143912  444547 command_runner.go:130] >     },
	I0819 18:51:36.143916  444547 command_runner.go:130] >     {
	I0819 18:51:36.143922  444547 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 18:51:36.143929  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.143934  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 18:51:36.143939  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143943  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.143953  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 18:51:36.143960  444547 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 18:51:36.143967  444547 command_runner.go:130] >       ],
	I0819 18:51:36.143978  444547 command_runner.go:130] >       "size": "31470524",
	I0819 18:51:36.143984  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.143992  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144001  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144007  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144016  444547 command_runner.go:130] >     },
	I0819 18:51:36.144021  444547 command_runner.go:130] >     {
	I0819 18:51:36.144036  444547 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 18:51:36.144043  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144048  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 18:51:36.144054  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144058  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144067  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 18:51:36.144085  444547 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 18:51:36.144093  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144100  444547 command_runner.go:130] >       "size": "61245718",
	I0819 18:51:36.144109  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.144119  444547 command_runner.go:130] >       "username": "nonroot",
	I0819 18:51:36.144126  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144134  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144138  444547 command_runner.go:130] >     },
	I0819 18:51:36.144142  444547 command_runner.go:130] >     {
	I0819 18:51:36.144148  444547 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 18:51:36.144154  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144159  444547 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 18:51:36.144162  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144165  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144172  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 18:51:36.144188  444547 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 18:51:36.144197  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144204  444547 command_runner.go:130] >       "size": "149009664",
	I0819 18:51:36.144213  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144220  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144227  444547 command_runner.go:130] >       },
	I0819 18:51:36.144231  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144237  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144243  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144249  444547 command_runner.go:130] >     },
	I0819 18:51:36.144252  444547 command_runner.go:130] >     {
	I0819 18:51:36.144259  444547 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 18:51:36.144267  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144276  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 18:51:36.144285  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144291  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144305  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 18:51:36.144320  444547 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 18:51:36.144327  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144333  444547 command_runner.go:130] >       "size": "95233506",
	I0819 18:51:36.144337  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144341  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144347  444547 command_runner.go:130] >       },
	I0819 18:51:36.144352  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144358  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144365  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144374  444547 command_runner.go:130] >     },
	I0819 18:51:36.144380  444547 command_runner.go:130] >     {
	I0819 18:51:36.144389  444547 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 18:51:36.144399  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144408  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 18:51:36.144419  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144427  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144435  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 18:51:36.144449  444547 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 18:51:36.144471  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144501  444547 command_runner.go:130] >       "size": "89437512",
	I0819 18:51:36.144507  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144516  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144521  444547 command_runner.go:130] >       },
	I0819 18:51:36.144526  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144532  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144541  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144547  444547 command_runner.go:130] >     },
	I0819 18:51:36.144558  444547 command_runner.go:130] >     {
	I0819 18:51:36.144568  444547 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 18:51:36.144577  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144585  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 18:51:36.144593  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144600  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144611  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 18:51:36.144623  444547 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 18:51:36.144632  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144640  444547 command_runner.go:130] >       "size": "92728217",
	I0819 18:51:36.144649  444547 command_runner.go:130] >       "uid": null,
	I0819 18:51:36.144656  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144663  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144669  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144677  444547 command_runner.go:130] >     },
	I0819 18:51:36.144682  444547 command_runner.go:130] >     {
	I0819 18:51:36.144694  444547 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 18:51:36.144704  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144716  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 18:51:36.144725  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144734  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144755  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 18:51:36.144768  444547 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 18:51:36.144775  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144780  444547 command_runner.go:130] >       "size": "68420936",
	I0819 18:51:36.144789  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144798  444547 command_runner.go:130] >         "value": "0"
	I0819 18:51:36.144807  444547 command_runner.go:130] >       },
	I0819 18:51:36.144816  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144826  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144835  444547 command_runner.go:130] >       "pinned": false
	I0819 18:51:36.144843  444547 command_runner.go:130] >     },
	I0819 18:51:36.144849  444547 command_runner.go:130] >     {
	I0819 18:51:36.144864  444547 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 18:51:36.144873  444547 command_runner.go:130] >       "repoTags": [
	I0819 18:51:36.144882  444547 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 18:51:36.144892  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144901  444547 command_runner.go:130] >       "repoDigests": [
	I0819 18:51:36.144912  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 18:51:36.144926  444547 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 18:51:36.144934  444547 command_runner.go:130] >       ],
	I0819 18:51:36.144940  444547 command_runner.go:130] >       "size": "742080",
	I0819 18:51:36.144944  444547 command_runner.go:130] >       "uid": {
	I0819 18:51:36.144950  444547 command_runner.go:130] >         "value": "65535"
	I0819 18:51:36.144958  444547 command_runner.go:130] >       },
	I0819 18:51:36.144968  444547 command_runner.go:130] >       "username": "",
	I0819 18:51:36.144979  444547 command_runner.go:130] >       "spec": null,
	I0819 18:51:36.144988  444547 command_runner.go:130] >       "pinned": true
	I0819 18:51:36.144995  444547 command_runner.go:130] >     }
	I0819 18:51:36.145001  444547 command_runner.go:130] >   ]
	I0819 18:51:36.145008  444547 command_runner.go:130] > }
	I0819 18:51:36.145182  444547 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:51:36.145198  444547 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:51:36.145207  444547 kubeadm.go:934] updating node { 192.168.39.22 8441 v1.31.0 crio true true} ...
	I0819 18:51:36.145347  444547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-124593 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
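kubeadm.go:946 renders the kubelet systemd drop-in above from the node config: ExecStart is cleared and re-set with --hostname-override and --node-ip filled in for this machine (functional-124593, 192.168.39.22). A sketch of that kind of templating with text/template (the template text here is illustrative, not minikube's embedded one):

package main

import (
	"os"
	"text/template"
)

// unitTmpl is an illustrative drop-in template using the same flags as the log above.
const unitTmpl = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

type nodeParams struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values taken from the log: v1.31.0, functional-124593, 192.168.39.22.
	_ = t.Execute(os.Stdout, nodeParams{
		KubernetesVersion: "v1.31.0",
		NodeName:          "functional-124593",
		NodeIP:            "192.168.39.22",
	})
}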
	I0819 18:51:36.145440  444547 ssh_runner.go:195] Run: crio config
	I0819 18:51:36.185689  444547 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 18:51:36.185722  444547 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 18:51:36.185733  444547 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 18:51:36.185738  444547 command_runner.go:130] > #
	I0819 18:51:36.185763  444547 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 18:51:36.185772  444547 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 18:51:36.185782  444547 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 18:51:36.185794  444547 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 18:51:36.185800  444547 command_runner.go:130] > # reload'.
	I0819 18:51:36.185810  444547 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 18:51:36.185824  444547 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 18:51:36.185834  444547 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 18:51:36.185851  444547 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 18:51:36.185857  444547 command_runner.go:130] > [crio]
	I0819 18:51:36.185867  444547 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 18:51:36.185878  444547 command_runner.go:130] > # containers images, in this directory.
	I0819 18:51:36.185886  444547 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 18:51:36.185906  444547 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 18:51:36.185916  444547 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 18:51:36.185927  444547 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 18:51:36.185937  444547 command_runner.go:130] > # imagestore = ""
	I0819 18:51:36.185947  444547 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 18:51:36.185960  444547 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 18:51:36.185968  444547 command_runner.go:130] > storage_driver = "overlay"
	I0819 18:51:36.185979  444547 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 18:51:36.185990  444547 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 18:51:36.186001  444547 command_runner.go:130] > storage_option = [
	I0819 18:51:36.186010  444547 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 18:51:36.186018  444547 command_runner.go:130] > ]
	I0819 18:51:36.186029  444547 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 18:51:36.186041  444547 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 18:51:36.186052  444547 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 18:51:36.186068  444547 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 18:51:36.186082  444547 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 18:51:36.186092  444547 command_runner.go:130] > # always happen on a node reboot
	I0819 18:51:36.186103  444547 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 18:51:36.186124  444547 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 18:51:36.186136  444547 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 18:51:36.186147  444547 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 18:51:36.186155  444547 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 18:51:36.186168  444547 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 18:51:36.186183  444547 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 18:51:36.186193  444547 command_runner.go:130] > # internal_wipe = true
	I0819 18:51:36.186206  444547 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 18:51:36.186217  444547 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 18:51:36.186227  444547 command_runner.go:130] > # internal_repair = false
	I0819 18:51:36.186235  444547 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 18:51:36.186247  444547 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 18:51:36.186256  444547 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 18:51:36.186268  444547 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0819 18:51:36.186303  444547 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 18:51:36.186317  444547 command_runner.go:130] > [crio.api]
	I0819 18:51:36.186326  444547 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 18:51:36.186333  444547 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 18:51:36.186342  444547 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 18:51:36.186353  444547 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 18:51:36.186363  444547 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 18:51:36.186374  444547 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 18:51:36.186386  444547 command_runner.go:130] > # stream_port = "0"
	I0819 18:51:36.186395  444547 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 18:51:36.186402  444547 command_runner.go:130] > # stream_enable_tls = false
	I0819 18:51:36.186409  444547 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 18:51:36.186418  444547 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 18:51:36.186429  444547 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 18:51:36.186441  444547 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 18:51:36.186450  444547 command_runner.go:130] > # minutes.
	I0819 18:51:36.186457  444547 command_runner.go:130] > # stream_tls_cert = ""
	I0819 18:51:36.186468  444547 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 18:51:36.186486  444547 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 18:51:36.186498  444547 command_runner.go:130] > # stream_tls_key = ""
	I0819 18:51:36.186511  444547 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 18:51:36.186523  444547 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 18:51:36.186547  444547 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 18:51:36.186556  444547 command_runner.go:130] > # stream_tls_ca = ""
	I0819 18:51:36.186567  444547 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 18:51:36.186578  444547 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 18:51:36.186589  444547 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 18:51:36.186600  444547 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0819 18:51:36.186610  444547 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 18:51:36.186622  444547 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 18:51:36.186629  444547 command_runner.go:130] > [crio.runtime]
	I0819 18:51:36.186639  444547 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 18:51:36.186650  444547 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 18:51:36.186659  444547 command_runner.go:130] > # "nofile=1024:2048"
	I0819 18:51:36.186670  444547 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 18:51:36.186674  444547 command_runner.go:130] > # default_ulimits = [
	I0819 18:51:36.186678  444547 command_runner.go:130] > # ]
	I0819 18:51:36.186687  444547 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 18:51:36.186701  444547 command_runner.go:130] > # no_pivot = false
	I0819 18:51:36.186714  444547 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 18:51:36.186727  444547 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 18:51:36.186738  444547 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 18:51:36.186747  444547 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 18:51:36.186758  444547 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 18:51:36.186773  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 18:51:36.186783  444547 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 18:51:36.186791  444547 command_runner.go:130] > # Cgroup setting for conmon
	I0819 18:51:36.186805  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 18:51:36.186814  444547 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 18:51:36.186824  444547 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 18:51:36.186834  444547 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 18:51:36.186845  444547 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 18:51:36.186855  444547 command_runner.go:130] > conmon_env = [
	I0819 18:51:36.186864  444547 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 18:51:36.186872  444547 command_runner.go:130] > ]
	I0819 18:51:36.186881  444547 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 18:51:36.186891  444547 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 18:51:36.186902  444547 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 18:51:36.186911  444547 command_runner.go:130] > # default_env = [
	I0819 18:51:36.186916  444547 command_runner.go:130] > # ]
	I0819 18:51:36.186957  444547 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 18:51:36.186977  444547 command_runner.go:130] > # This option is deprecated, and will instead be derived from whether SELinux is enabled on the host in the future.
	I0819 18:51:36.186983  444547 command_runner.go:130] > # selinux = false
	I0819 18:51:36.186992  444547 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 18:51:36.187004  444547 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 18:51:36.187019  444547 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 18:51:36.187029  444547 command_runner.go:130] > # seccomp_profile = ""
	I0819 18:51:36.187038  444547 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 18:51:36.187049  444547 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 18:51:36.187059  444547 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 18:51:36.187069  444547 command_runner.go:130] > # which might increase security.
	I0819 18:51:36.187074  444547 command_runner.go:130] > # This option is currently deprecated,
	I0819 18:51:36.187084  444547 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 18:51:36.187095  444547 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 18:51:36.187107  444547 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 18:51:36.187127  444547 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 18:51:36.187139  444547 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 18:51:36.187152  444547 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0819 18:51:36.187160  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.187167  444547 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 18:51:36.187178  444547 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 18:51:36.187188  444547 command_runner.go:130] > # the cgroup blockio controller.
	I0819 18:51:36.187200  444547 command_runner.go:130] > # blockio_config_file = ""
	I0819 18:51:36.187214  444547 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 18:51:36.187224  444547 command_runner.go:130] > # blockio parameters.
	I0819 18:51:36.187231  444547 command_runner.go:130] > # blockio_reload = false
	I0819 18:51:36.187241  444547 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 18:51:36.187250  444547 command_runner.go:130] > # irqbalance daemon.
	I0819 18:51:36.187259  444547 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 18:51:36.187271  444547 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I0819 18:51:36.187285  444547 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 18:51:36.187297  444547 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 18:51:36.187309  444547 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 18:51:36.187322  444547 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 18:51:36.187332  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.187344  444547 command_runner.go:130] > # rdt_config_file = ""
	I0819 18:51:36.187353  444547 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 18:51:36.187363  444547 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 18:51:36.187390  444547 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 18:51:36.187400  444547 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 18:51:36.187410  444547 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 18:51:36.187425  444547 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 18:51:36.187435  444547 command_runner.go:130] > # will be added.
	I0819 18:51:36.187442  444547 command_runner.go:130] > # default_capabilities = [
	I0819 18:51:36.187451  444547 command_runner.go:130] > # 	"CHOWN",
	I0819 18:51:36.187458  444547 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 18:51:36.187466  444547 command_runner.go:130] > # 	"FSETID",
	I0819 18:51:36.187476  444547 command_runner.go:130] > # 	"FOWNER",
	I0819 18:51:36.187484  444547 command_runner.go:130] > # 	"SETGID",
	I0819 18:51:36.187490  444547 command_runner.go:130] > # 	"SETUID",
	I0819 18:51:36.187499  444547 command_runner.go:130] > # 	"SETPCAP",
	I0819 18:51:36.187506  444547 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 18:51:36.187516  444547 command_runner.go:130] > # 	"KILL",
	I0819 18:51:36.187521  444547 command_runner.go:130] > # ]
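As a hedged illustration (not taken from the captured config), trimming the default capability set would use the same TOML list form, with values drawn from the commented defaults above:
	default_capabilities = [
		"CHOWN",
		"DAC_OVERRIDE",
		"NET_BIND_SERVICE",
		"KILL",
	]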
	I0819 18:51:36.187536  444547 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 18:51:36.187549  444547 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 18:51:36.187564  444547 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 18:51:36.187577  444547 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 18:51:36.187588  444547 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 18:51:36.187595  444547 command_runner.go:130] > default_sysctls = [
	I0819 18:51:36.187599  444547 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 18:51:36.187602  444547 command_runner.go:130] > ]
	I0819 18:51:36.187607  444547 command_runner.go:130] > # List of devices on the host that a
	I0819 18:51:36.187613  444547 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 18:51:36.187617  444547 command_runner.go:130] > # allowed_devices = [
	I0819 18:51:36.187621  444547 command_runner.go:130] > # 	"/dev/fuse",
	I0819 18:51:36.187626  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187637  444547 command_runner.go:130] > # List of additional devices, specified as
	I0819 18:51:36.187650  444547 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 18:51:36.187663  444547 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 18:51:36.187675  444547 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 18:51:36.187685  444547 command_runner.go:130] > # additional_devices = [
	I0819 18:51:36.187690  444547 command_runner.go:130] > # ]
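A minimal sketch of the "<device-on-host>:<device-on-container>:<permissions>" form described above; /dev/fuse is chosen only because it already appears in the commented allowed_devices list:
	additional_devices = [
		"/dev/fuse:/dev/fuse:rwm",
	]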
	I0819 18:51:36.187699  444547 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 18:51:36.187703  444547 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 18:51:36.187707  444547 command_runner.go:130] > # 	"/etc/cdi",
	I0819 18:51:36.187711  444547 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 18:51:36.187715  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187721  444547 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 18:51:36.187729  444547 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 18:51:36.187735  444547 command_runner.go:130] > # Defaults to false.
	I0819 18:51:36.187739  444547 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 18:51:36.187746  444547 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 18:51:36.187753  444547 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 18:51:36.187756  444547 command_runner.go:130] > # hooks_dir = [
	I0819 18:51:36.187761  444547 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 18:51:36.187766  444547 command_runner.go:130] > # ]
	I0819 18:51:36.187775  444547 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 18:51:36.187788  444547 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 18:51:36.187800  444547 command_runner.go:130] > # its default mounts from the following two files:
	I0819 18:51:36.187808  444547 command_runner.go:130] > #
	I0819 18:51:36.187819  444547 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 18:51:36.187831  444547 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 18:51:36.187841  444547 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 18:51:36.187846  444547 command_runner.go:130] > #
	I0819 18:51:36.187856  444547 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 18:51:36.187870  444547 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 18:51:36.187887  444547 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 18:51:36.187899  444547 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 18:51:36.187907  444547 command_runner.go:130] > #
	I0819 18:51:36.187915  444547 command_runner.go:130] > # default_mounts_file = ""
	I0819 18:51:36.187927  444547 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 18:51:36.187940  444547 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 18:51:36.187948  444547 command_runner.go:130] > pids_limit = 1024
	I0819 18:51:36.187961  444547 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0819 18:51:36.187976  444547 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 18:51:36.187989  444547 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 18:51:36.188004  444547 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 18:51:36.188020  444547 command_runner.go:130] > # log_size_max = -1
	I0819 18:51:36.188034  444547 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 18:51:36.188043  444547 command_runner.go:130] > # log_to_journald = false
	I0819 18:51:36.188053  444547 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 18:51:36.188064  444547 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 18:51:36.188076  444547 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 18:51:36.188084  444547 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 18:51:36.188095  444547 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 18:51:36.188103  444547 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 18:51:36.188113  444547 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 18:51:36.188123  444547 command_runner.go:130] > # read_only = false
	I0819 18:51:36.188133  444547 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 18:51:36.188144  444547 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 18:51:36.188151  444547 command_runner.go:130] > # live configuration reload.
	I0819 18:51:36.188161  444547 command_runner.go:130] > # log_level = "info"
	I0819 18:51:36.188171  444547 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 18:51:36.188182  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.188190  444547 command_runner.go:130] > # log_filter = ""
	I0819 18:51:36.188199  444547 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 18:51:36.188216  444547 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 18:51:36.188225  444547 command_runner.go:130] > # separated by comma.
	I0819 18:51:36.188237  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188247  444547 command_runner.go:130] > # uid_mappings = ""
	I0819 18:51:36.188257  444547 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 18:51:36.188269  444547 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 18:51:36.188278  444547 command_runner.go:130] > # separated by comma.
	I0819 18:51:36.188293  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188303  444547 command_runner.go:130] > # gid_mappings = ""
	I0819 18:51:36.188313  444547 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 18:51:36.188325  444547 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 18:51:36.188337  444547 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 18:51:36.188351  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188359  444547 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 18:51:36.188366  444547 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 18:51:36.188375  444547 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 18:51:36.188381  444547 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 18:51:36.188390  444547 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:51:36.188394  444547 command_runner.go:130] > # minimum_mappable_gid = -1
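A hypothetical example of the containerUID:HostUID:Size (and containerGID:HostGID:Size) mapping format described above; the concrete ranges are illustrative only, and the options remain deprecated as noted:
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"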
	I0819 18:51:36.188402  444547 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 18:51:36.188408  444547 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 18:51:36.188415  444547 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 18:51:36.188419  444547 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 18:51:36.188424  444547 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 18:51:36.188430  444547 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 18:51:36.188437  444547 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 18:51:36.188441  444547 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 18:51:36.188445  444547 command_runner.go:130] > drop_infra_ctr = false
	I0819 18:51:36.188451  444547 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 18:51:36.188458  444547 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 18:51:36.188465  444547 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 18:51:36.188471  444547 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 18:51:36.188482  444547 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 18:51:36.188489  444547 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 18:51:36.188495  444547 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 18:51:36.188502  444547 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 18:51:36.188506  444547 command_runner.go:130] > # shared_cpuset = ""
	I0819 18:51:36.188514  444547 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 18:51:36.188519  444547 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 18:51:36.188524  444547 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 18:51:36.188531  444547 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 18:51:36.188537  444547 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 18:51:36.188549  444547 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 18:51:36.188561  444547 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 18:51:36.188571  444547 command_runner.go:130] > # enable_criu_support = false
	I0819 18:51:36.188579  444547 command_runner.go:130] > # Enable/disable the generation of the container,
	I0819 18:51:36.188591  444547 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 18:51:36.188598  444547 command_runner.go:130] > # enable_pod_events = false
	I0819 18:51:36.188604  444547 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 18:51:36.188620  444547 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 18:51:36.188626  444547 command_runner.go:130] > # default_runtime = "runc"
	I0819 18:51:36.188631  444547 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 18:51:36.188638  444547 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0819 18:51:36.188649  444547 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 18:51:36.188656  444547 command_runner.go:130] > # creation as a file is not desired either.
	I0819 18:51:36.188664  444547 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 18:51:36.188671  444547 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 18:51:36.188675  444547 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 18:51:36.188681  444547 command_runner.go:130] > # ]
	I0819 18:51:36.188686  444547 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 18:51:36.188694  444547 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 18:51:36.188700  444547 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 18:51:36.188708  444547 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 18:51:36.188711  444547 command_runner.go:130] > #
	I0819 18:51:36.188716  444547 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 18:51:36.188720  444547 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 18:51:36.188744  444547 command_runner.go:130] > # runtime_type = "oci"
	I0819 18:51:36.188752  444547 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 18:51:36.188757  444547 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 18:51:36.188763  444547 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 18:51:36.188768  444547 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 18:51:36.188774  444547 command_runner.go:130] > # monitor_env = []
	I0819 18:51:36.188778  444547 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 18:51:36.188782  444547 command_runner.go:130] > # allowed_annotations = []
	I0819 18:51:36.188790  444547 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 18:51:36.188795  444547 command_runner.go:130] > # Where:
	I0819 18:51:36.188800  444547 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 18:51:36.188806  444547 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 18:51:36.188813  444547 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 18:51:36.188822  444547 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 18:51:36.188828  444547 command_runner.go:130] > #   in $PATH.
	I0819 18:51:36.188834  444547 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 18:51:36.188839  444547 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 18:51:36.188845  444547 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 18:51:36.188851  444547 command_runner.go:130] > #   state.
	I0819 18:51:36.188858  444547 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 18:51:36.188865  444547 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0819 18:51:36.188871  444547 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 18:51:36.188879  444547 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 18:51:36.188885  444547 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 18:51:36.188893  444547 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 18:51:36.188898  444547 command_runner.go:130] > #   The currently recognized values are:
	I0819 18:51:36.188904  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 18:51:36.188911  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 18:51:36.188917  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 18:51:36.188925  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 18:51:36.188934  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 18:51:36.188940  444547 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 18:51:36.188948  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 18:51:36.188954  444547 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 18:51:36.188962  444547 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 18:51:36.188968  444547 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 18:51:36.188972  444547 command_runner.go:130] > #   deprecated option "conmon".
	I0819 18:51:36.188979  444547 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 18:51:36.188985  444547 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 18:51:36.188992  444547 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 18:51:36.188998  444547 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 18:51:36.189006  444547 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0819 18:51:36.189013  444547 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 18:51:36.189019  444547 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 18:51:36.189026  444547 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0819 18:51:36.189031  444547 command_runner.go:130] > #
	I0819 18:51:36.189041  444547 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 18:51:36.189044  444547 command_runner.go:130] > #
	I0819 18:51:36.189051  444547 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 18:51:36.189058  444547 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 18:51:36.189062  444547 command_runner.go:130] > #
	I0819 18:51:36.189070  444547 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 18:51:36.189078  444547 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 18:51:36.189082  444547 command_runner.go:130] > #
	I0819 18:51:36.189089  444547 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 18:51:36.189095  444547 command_runner.go:130] > # feature.
	I0819 18:51:36.189100  444547 command_runner.go:130] > #
	I0819 18:51:36.189106  444547 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0819 18:51:36.189114  444547 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 18:51:36.189120  444547 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 18:51:36.189127  444547 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 18:51:36.189146  444547 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 18:51:36.189154  444547 command_runner.go:130] > #
	I0819 18:51:36.189163  444547 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 18:51:36.189174  444547 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 18:51:36.189178  444547 command_runner.go:130] > #
	I0819 18:51:36.189184  444547 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 18:51:36.189192  444547 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 18:51:36.189195  444547 command_runner.go:130] > #
	I0819 18:51:36.189203  444547 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 18:51:36.189209  444547 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 18:51:36.189214  444547 command_runner.go:130] > # limitation.
	I0819 18:51:36.189220  444547 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 18:51:36.189226  444547 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 18:51:36.189230  444547 command_runner.go:130] > runtime_type = "oci"
	I0819 18:51:36.189234  444547 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 18:51:36.189240  444547 command_runner.go:130] > runtime_config_path = ""
	I0819 18:51:36.189244  444547 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 18:51:36.189248  444547 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 18:51:36.189252  444547 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 18:51:36.189256  444547 command_runner.go:130] > monitor_env = [
	I0819 18:51:36.189261  444547 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 18:51:36.189266  444547 command_runner.go:130] > ]
	I0819 18:51:36.189270  444547 command_runner.go:130] > privileged_without_host_devices = false
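A sketch, using only the keys documented in the runtime-handler table format above, of how an additional handler could be declared; the crun binary and its paths are assumptions and not part of this cluster's configuration:
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_exec_cgroup = ""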
	I0819 18:51:36.189278  444547 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 18:51:36.189283  444547 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 18:51:36.189291  444547 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 18:51:36.189302  444547 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0819 18:51:36.189311  444547 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 18:51:36.189317  444547 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 18:51:36.189328  444547 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 18:51:36.189339  444547 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 18:51:36.189346  444547 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 18:51:36.189353  444547 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 18:51:36.189358  444547 command_runner.go:130] > # Example:
	I0819 18:51:36.189363  444547 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 18:51:36.189370  444547 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 18:51:36.189374  444547 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 18:51:36.189382  444547 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 18:51:36.189386  444547 command_runner.go:130] > # cpuset = 0
	I0819 18:51:36.189393  444547 command_runner.go:130] > # cpushares = "0-1"
	I0819 18:51:36.189396  444547 command_runner.go:130] > # Where:
	I0819 18:51:36.189401  444547 command_runner.go:130] > # The workload name is workload-type.
	I0819 18:51:36.189409  444547 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 18:51:36.189415  444547 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 18:51:36.189422  444547 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 18:51:36.189430  444547 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 18:51:36.189437  444547 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0819 18:51:36.189442  444547 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 18:51:36.189449  444547 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 18:51:36.189455  444547 command_runner.go:130] > # Default value is set to true
	I0819 18:51:36.189459  444547 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 18:51:36.189469  444547 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 18:51:36.189478  444547 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 18:51:36.189484  444547 command_runner.go:130] > # Default value is set to 'false'
	I0819 18:51:36.189489  444547 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 18:51:36.189497  444547 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 18:51:36.189500  444547 command_runner.go:130] > #
	I0819 18:51:36.189505  444547 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 18:51:36.189513  444547 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 18:51:36.189519  444547 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 18:51:36.189528  444547 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 18:51:36.189536  444547 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0819 18:51:36.189542  444547 command_runner.go:130] > [crio.image]
	I0819 18:51:36.189548  444547 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 18:51:36.189554  444547 command_runner.go:130] > # default_transport = "docker://"
	I0819 18:51:36.189560  444547 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 18:51:36.189569  444547 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 18:51:36.189574  444547 command_runner.go:130] > # global_auth_file = ""
	I0819 18:51:36.189578  444547 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 18:51:36.189583  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.189590  444547 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 18:51:36.189596  444547 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 18:51:36.189604  444547 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 18:51:36.189609  444547 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:51:36.189615  444547 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 18:51:36.189620  444547 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 18:51:36.189626  444547 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0819 18:51:36.189632  444547 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0819 18:51:36.189639  444547 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 18:51:36.189643  444547 command_runner.go:130] > # pause_command = "/pause"
	I0819 18:51:36.189649  444547 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 18:51:36.189655  444547 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 18:51:36.189660  444547 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 18:51:36.189670  444547 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 18:51:36.189678  444547 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 18:51:36.189684  444547 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 18:51:36.189690  444547 command_runner.go:130] > # pinned_images = [
	I0819 18:51:36.189693  444547 command_runner.go:130] > # ]
	I0819 18:51:36.189700  444547 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 18:51:36.189707  444547 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 18:51:36.189713  444547 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 18:51:36.189721  444547 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 18:51:36.189726  444547 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 18:51:36.189732  444547 command_runner.go:130] > # signature_policy = ""
	I0819 18:51:36.189737  444547 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 18:51:36.189744  444547 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 18:51:36.189754  444547 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 18:51:36.189762  444547 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0819 18:51:36.189770  444547 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 18:51:36.189775  444547 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 18:51:36.189781  444547 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 18:51:36.189786  444547 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 18:51:36.189791  444547 command_runner.go:130] > # changing them here.
	I0819 18:51:36.189795  444547 command_runner.go:130] > # insecure_registries = [
	I0819 18:51:36.189798  444547 command_runner.go:130] > # ]
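For illustration only (the registry address is hypothetical, and registries.conf remains the preferred place for this, as the comment above notes), an uncommented entry would look like:
	insecure_registries = [
		"192.168.39.1:5000",
	]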
	I0819 18:51:36.189804  444547 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 18:51:36.189808  444547 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0819 18:51:36.189812  444547 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 18:51:36.189816  444547 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 18:51:36.189820  444547 command_runner.go:130] > # big_files_temporary_dir = ""
	I0819 18:51:36.189826  444547 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 18:51:36.189829  444547 command_runner.go:130] > # CNI plugins.
	I0819 18:51:36.189832  444547 command_runner.go:130] > [crio.network]
	I0819 18:51:36.189838  444547 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 18:51:36.189842  444547 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0819 18:51:36.189847  444547 command_runner.go:130] > # cni_default_network = ""
	I0819 18:51:36.189851  444547 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 18:51:36.189855  444547 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 18:51:36.189860  444547 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 18:51:36.189863  444547 command_runner.go:130] > # plugin_dirs = [
	I0819 18:51:36.189867  444547 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 18:51:36.189870  444547 command_runner.go:130] > # ]
	I0819 18:51:36.189875  444547 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0819 18:51:36.189879  444547 command_runner.go:130] > [crio.metrics]
	I0819 18:51:36.189883  444547 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 18:51:36.189887  444547 command_runner.go:130] > enable_metrics = true
	I0819 18:51:36.189891  444547 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 18:51:36.189895  444547 command_runner.go:130] > # By default, all metrics are enabled.
	I0819 18:51:36.189900  444547 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0819 18:51:36.189906  444547 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 18:51:36.189911  444547 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 18:51:36.189915  444547 command_runner.go:130] > # metrics_collectors = [
	I0819 18:51:36.189918  444547 command_runner.go:130] > # 	"operations",
	I0819 18:51:36.189923  444547 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 18:51:36.189927  444547 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 18:51:36.189931  444547 command_runner.go:130] > # 	"operations_errors",
	I0819 18:51:36.189935  444547 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 18:51:36.189938  444547 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 18:51:36.189946  444547 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 18:51:36.189950  444547 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 18:51:36.189954  444547 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 18:51:36.189958  444547 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 18:51:36.189962  444547 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 18:51:36.189970  444547 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 18:51:36.189973  444547 command_runner.go:130] > # 	"containers_oom_total",
	I0819 18:51:36.189977  444547 command_runner.go:130] > # 	"containers_oom",
	I0819 18:51:36.189980  444547 command_runner.go:130] > # 	"processes_defunct",
	I0819 18:51:36.189984  444547 command_runner.go:130] > # 	"operations_total",
	I0819 18:51:36.189988  444547 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 18:51:36.189993  444547 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 18:51:36.189997  444547 command_runner.go:130] > # 	"operations_errors_total",
	I0819 18:51:36.190001  444547 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 18:51:36.190005  444547 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 18:51:36.190009  444547 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 18:51:36.190013  444547 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 18:51:36.190017  444547 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 18:51:36.190021  444547 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 18:51:36.190026  444547 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 18:51:36.190033  444547 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 18:51:36.190035  444547 command_runner.go:130] > # ]
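A minimal sketch of enabling only a subset of collectors, with names taken from the commented list above:
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_count_total",
	]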
	I0819 18:51:36.190040  444547 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 18:51:36.190046  444547 command_runner.go:130] > # metrics_port = 9090
	I0819 18:51:36.190051  444547 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 18:51:36.190055  444547 command_runner.go:130] > # metrics_socket = ""
	I0819 18:51:36.190061  444547 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 18:51:36.190069  444547 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 18:51:36.190075  444547 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 18:51:36.190082  444547 command_runner.go:130] > # certificate on any modification event.
	I0819 18:51:36.190085  444547 command_runner.go:130] > # metrics_cert = ""
	I0819 18:51:36.190090  444547 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 18:51:36.190097  444547 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 18:51:36.190101  444547 command_runner.go:130] > # metrics_key = ""
	I0819 18:51:36.190106  444547 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 18:51:36.190110  444547 command_runner.go:130] > [crio.tracing]
	I0819 18:51:36.190117  444547 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 18:51:36.190124  444547 command_runner.go:130] > # enable_tracing = false
	I0819 18:51:36.190129  444547 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0819 18:51:36.190135  444547 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 18:51:36.190142  444547 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 18:51:36.190147  444547 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
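A hedged sketch of turning tracing on with the options listed above; the collector endpoint is a placeholder, and 1000000 follows the "always sample" note in the comment:
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000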
	I0819 18:51:36.190151  444547 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 18:51:36.190154  444547 command_runner.go:130] > [crio.nri]
	I0819 18:51:36.190158  444547 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 18:51:36.190167  444547 command_runner.go:130] > # enable_nri = false
	I0819 18:51:36.190172  444547 command_runner.go:130] > # NRI socket to listen on.
	I0819 18:51:36.190177  444547 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 18:51:36.190183  444547 command_runner.go:130] > # NRI plugin directory to use.
	I0819 18:51:36.190188  444547 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 18:51:36.190194  444547 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 18:51:36.190198  444547 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 18:51:36.190205  444547 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 18:51:36.190209  444547 command_runner.go:130] > # nri_disable_connections = false
	I0819 18:51:36.190217  444547 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 18:51:36.190221  444547 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 18:51:36.190228  444547 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 18:51:36.190233  444547 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0819 18:51:36.190238  444547 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 18:51:36.190243  444547 command_runner.go:130] > [crio.stats]
	I0819 18:51:36.190249  444547 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 18:51:36.190255  444547 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 18:51:36.190259  444547 command_runner.go:130] > # stats_collection_period = 0
	I0819 18:51:36.190450  444547 command_runner.go:130] ! time="2024-08-19 18:51:36.161529726Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 18:51:36.190501  444547 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0819 18:51:36.190630  444547 cni.go:84] Creating CNI manager for ""
	I0819 18:51:36.190641  444547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:51:36.190651  444547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:51:36.190674  444547 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8441 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-124593 NodeName:functional-124593 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:51:36.190815  444547 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-124593"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:51:36.190886  444547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:51:36.200955  444547 command_runner.go:130] > kubeadm
	I0819 18:51:36.200981  444547 command_runner.go:130] > kubectl
	I0819 18:51:36.200986  444547 command_runner.go:130] > kubelet
	I0819 18:51:36.201016  444547 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:51:36.201072  444547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:51:36.211041  444547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0819 18:51:36.228264  444547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:51:36.245722  444547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0819 18:51:36.263018  444547 ssh_runner.go:195] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I0819 18:51:36.267130  444547 command_runner.go:130] > 192.168.39.22	control-plane.minikube.internal
	I0819 18:51:36.267229  444547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:51:36.398107  444547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:51:36.412895  444547 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593 for IP: 192.168.39.22
	I0819 18:51:36.412924  444547 certs.go:194] generating shared ca certs ...
	I0819 18:51:36.412943  444547 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:51:36.413154  444547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 18:51:36.413203  444547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 18:51:36.413217  444547 certs.go:256] generating profile certs ...
	I0819 18:51:36.413317  444547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.key
	I0819 18:51:36.413414  444547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key.aa5a99d1
	I0819 18:51:36.413463  444547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key
	I0819 18:51:36.413478  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:51:36.413496  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:51:36.413514  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:51:36.413543  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:51:36.413558  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:51:36.413577  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:51:36.413596  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:51:36.413612  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:51:36.413684  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 18:51:36.413728  444547 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 18:51:36.413741  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 18:51:36.413782  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:51:36.413816  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:51:36.413853  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 18:51:36.413906  444547 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 18:51:36.413944  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.413964  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.413981  444547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.414774  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:51:36.439176  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:51:36.463796  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:51:36.490998  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 18:51:36.514746  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 18:51:36.538661  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:51:36.562630  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:51:36.586739  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 18:51:36.610889  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:51:36.634562  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 18:51:36.658286  444547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 18:51:36.681715  444547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:51:36.698451  444547 ssh_runner.go:195] Run: openssl version
	I0819 18:51:36.704220  444547 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 18:51:36.704339  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 18:51:36.715389  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720025  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720080  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.720142  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 18:51:36.725901  444547 command_runner.go:130] > 51391683
	I0819 18:51:36.726015  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 18:51:36.736206  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 18:51:36.747737  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752558  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752599  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.752642  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 18:51:36.758223  444547 command_runner.go:130] > 3ec20f2e
	I0819 18:51:36.758300  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:51:36.767946  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:51:36.779143  444547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783850  444547 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783902  444547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.783950  444547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:51:36.789800  444547 command_runner.go:130] > b5213941
	I0819 18:51:36.789894  444547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
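
The Run lines above are the pattern minikube uses to install each CA into the guest trust store: link the PEM from /usr/share/ca-certificates into /etc/ssl/certs, compute its OpenSSL subject hash, then create a hash-named .0 symlink so OpenSSL's lookup-by-hash can find it. A condensed sketch of the same sequence (paths copied from the log; the hash is whatever openssl prints, b5213941 in this run):

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0                # name OpenSSL uses for hash lookup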
	I0819 18:51:36.799700  444547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:51:36.804144  444547 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:51:36.804180  444547 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 18:51:36.804188  444547 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0819 18:51:36.804194  444547 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 18:51:36.804201  444547 command_runner.go:130] > Access: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804206  444547 command_runner.go:130] > Modify: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804217  444547 command_runner.go:130] > Change: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804222  444547 command_runner.go:130] >  Birth: 2024-08-19 18:49:33.961834684 +0000
	I0819 18:51:36.804284  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:51:36.810230  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.810339  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:51:36.816159  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.816241  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:51:36.821909  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.822019  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:51:36.827758  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.827847  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:51:36.833329  444547 command_runner.go:130] > Certificate will not expire
	I0819 18:51:36.833420  444547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 18:51:36.838995  444547 command_runner.go:130] > Certificate will not expire
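
Each check above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 plus the "Certificate will not expire" message means it is still valid for that window. The same check can be repeated by hand against any file under /var/lib/minikube/certs, for example:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for 24h" || echo "expires within 24h"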
	I0819 18:51:36.839152  444547 kubeadm.go:392] StartCluster: {Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:51:36.839251  444547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:51:36.839310  444547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:51:36.874453  444547 command_runner.go:130] > e908c3e4c32bde71b6441e3ee7b44b5559f7c82f0a6a46e750222d6420ee5768
	I0819 18:51:36.874803  444547 command_runner.go:130] > 790104913a298ce37e7cba7691f8e4d617c4a8638cb821e244b70555efd66fbf
	I0819 18:51:36.874823  444547 command_runner.go:130] > aa84abbb4c9f6505401955b079afdb2f11c046f6879ee08a957007aaca875b03
	I0819 18:51:36.874834  444547 command_runner.go:130] > d8bfd36fbe96397e082982d91cabf0e8c731c94d88ed6c685f08a18b2da4cc6c
	I0819 18:51:36.874843  444547 command_runner.go:130] > e4040e2a8622453f1ef2fd04190fd96bea2fd3de919c6b06deaa38e83d38cf5b
	I0819 18:51:36.874899  444547 command_runner.go:130] > 8df7ac76fe134c848ddad74ca75f51e5c19afffdbd0712cb3d74333cdc6b91dc
	I0819 18:51:36.875009  444547 command_runner.go:130] > 94e0fe7cb19015bd08c7406824ba63263d67eec22f1240ef0f0eaff258976b4f
	I0819 18:51:36.875035  444547 command_runner.go:130] > 871ee5a26fc4d8fe6f04e66a22c78ba6e6b80077bd1066ab3a3c1acb639de113
	I0819 18:51:36.875045  444547 command_runner.go:130] > 70ce15fbc3bc32cd55767aab9cde75fc9d6f452d9c22c53034e5c2af14442b32
	I0819 18:51:36.875236  444547 command_runner.go:130] > 7703464bfd87d33565890b044096dcd7f50a8b1340ec99f2a88431946c69fc1b
	I0819 18:51:36.875268  444547 command_runner.go:130] > d65fc62d1475e4da38a8c95d6b13d520659291d23ac972a2d197d382a7fa6027
	I0819 18:51:36.875360  444547 command_runner.go:130] > d89c8be1ccfda0fbaa81ee519dc2e56af6349b6060b1afd9b3ac512310edc348
	I0819 18:51:36.875408  444547 command_runner.go:130] > e38111b143b726d6b06442db0379d3ecea609946581cb76904945ed68a359c74
	I0819 18:51:36.876958  444547 cri.go:89] found id: "e908c3e4c32bde71b6441e3ee7b44b5559f7c82f0a6a46e750222d6420ee5768"
	I0819 18:51:36.876978  444547 cri.go:89] found id: "790104913a298ce37e7cba7691f8e4d617c4a8638cb821e244b70555efd66fbf"
	I0819 18:51:36.876984  444547 cri.go:89] found id: "aa84abbb4c9f6505401955b079afdb2f11c046f6879ee08a957007aaca875b03"
	I0819 18:51:36.876989  444547 cri.go:89] found id: "d8bfd36fbe96397e082982d91cabf0e8c731c94d88ed6c685f08a18b2da4cc6c"
	I0819 18:51:36.876993  444547 cri.go:89] found id: "e4040e2a8622453f1ef2fd04190fd96bea2fd3de919c6b06deaa38e83d38cf5b"
	I0819 18:51:36.876998  444547 cri.go:89] found id: "8df7ac76fe134c848ddad74ca75f51e5c19afffdbd0712cb3d74333cdc6b91dc"
	I0819 18:51:36.877002  444547 cri.go:89] found id: "94e0fe7cb19015bd08c7406824ba63263d67eec22f1240ef0f0eaff258976b4f"
	I0819 18:51:36.877006  444547 cri.go:89] found id: "871ee5a26fc4d8fe6f04e66a22c78ba6e6b80077bd1066ab3a3c1acb639de113"
	I0819 18:51:36.877010  444547 cri.go:89] found id: "70ce15fbc3bc32cd55767aab9cde75fc9d6f452d9c22c53034e5c2af14442b32"
	I0819 18:51:36.877024  444547 cri.go:89] found id: "7703464bfd87d33565890b044096dcd7f50a8b1340ec99f2a88431946c69fc1b"
	I0819 18:51:36.877032  444547 cri.go:89] found id: "d65fc62d1475e4da38a8c95d6b13d520659291d23ac972a2d197d382a7fa6027"
	I0819 18:51:36.877036  444547 cri.go:89] found id: "d89c8be1ccfda0fbaa81ee519dc2e56af6349b6060b1afd9b3ac512310edc348"
	I0819 18:51:36.877040  444547 cri.go:89] found id: "e38111b143b726d6b06442db0379d3ecea609946581cb76904945ed68a359c74"
	I0819 18:51:36.877044  444547 cri.go:89] found id: ""
	I0819 18:51:36.877087  444547 ssh_runner.go:195] Run: sudo runc list -f json
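
Before deciding how to restart the control plane, minikube enumerates the existing kube-system containers through CRI; the thirteen IDs above are that inventory. Roughly the same query can be run by hand inside the guest, using the crictl invocation shown in the log plus the follow-up runc listing:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # every kube-system container ID, running or exited
    sudo runc list -f json                                                      # low-level runtime view of the same containers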
	
	
	==> CRI-O <==
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.308196037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094242308172625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aba75781-7809-42e7-ab5c-f771349342b1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.308646728Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e123a73-71b9-4785-a42b-0ba56f20b3d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.308693279Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e123a73-71b9-4785-a42b-0ba56f20b3d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.308781887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e123a73-71b9-4785-a42b-0ba56f20b3d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.341020394Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddb7c607-9829-4a33-9274-7aff00cd5430 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.341094868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddb7c607-9829-4a33-9274-7aff00cd5430 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.342356338Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2cbb1137-c160-4b82-8308-7b720ec04dac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.343124165Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094242343094714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2cbb1137-c160-4b82-8308-7b720ec04dac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.343702606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46f56664-c8ac-4914-a87b-70cb236119c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.343755356Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46f56664-c8ac-4914-a87b-70cb236119c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.343838772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46f56664-c8ac-4914-a87b-70cb236119c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.381067181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36ca0e5d-1224-4f81-a1ba-c48d3e7afb1d name=/runtime.v1.RuntimeService/Version
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.381139187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36ca0e5d-1224-4f81-a1ba-c48d3e7afb1d name=/runtime.v1.RuntimeService/Version
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.382702783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=855984f8-3434-405a-a34a-a10ff15f8154 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.383170428Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094242383146749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=855984f8-3434-405a-a34a-a10ff15f8154 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.383911504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf679cad-a2f9-43ed-bc79-b202ea9552ed name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.384251387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf679cad-a2f9-43ed-bc79-b202ea9552ed name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.384568599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf679cad-a2f9-43ed-bc79-b202ea9552ed name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.423889935Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=623ff464-ca6b-4fac-b096-5c0fd90ea160 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.423990717Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=623ff464-ca6b-4fac-b096-5c0fd90ea160 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.425143309Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4f99bee-8fe9-43b7-a558-e2fb04193114 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.425710745Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094242425685319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4f99bee-8fe9-43b7-a558-e2fb04193114 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.426228161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=688d05e2-4180-4f05-9c41-049c0817306a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.426295050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=688d05e2-4180-4f05-9c41-049c0817306a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:04:02 functional-124593 crio[3397]: time="2024-08-19 19:04:02.426386699Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329128126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restart
Count: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d,PodSandboxId:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094163327692056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca,PodSandboxId:ddca0e39cb48d9da271cf14439314e1118baec86bc841a867aeb3fe42df61ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724093988918818534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=688d05e2-4180-4f05-9c41-049c0817306a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e764198234f75       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   About a minute ago   Exited              kube-controller-manager   15                  1b98c8cb37fd8       kube-controller-manager-functional-124593
	effebbec1cbf2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   About a minute ago   Exited              kube-apiserver            15                  59013506b9174       kube-apiserver-functional-124593
	e3ddc8f73f9e6       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   4 minutes ago        Running             kube-scheduler            4                   ddca0e39cb48d       kube-scheduler-functional-124593
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
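
The describe step fails because nothing is serving on API server port 8441, so every kubectl call is refused. Two quick checks from inside the VM (via minikube ssh) that usually narrow this down; these are hedged suggestions, not commands taken from this log:

    curl -sk https://192.168.39.22:8441/healthz   # does anything answer on the advertised apiserver endpoint?
    sudo crictl ps -a --name kube-apiserver       # running, exited, or crash-looping?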
	
	
	==> dmesg <==
	[  +0.066009] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.197712] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.124470] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.281624] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +4.011758] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +4.140421] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.057144] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989777] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[  +0.082461] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.724179] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.114451] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.497778] kauditd_printk_skb: 98 callbacks suppressed
	[Aug19 18:50] systemd-fstab-generator[3042]: Ignoring "noauto" option for root device
	[  +0.214760] systemd-fstab-generator[3119]: Ignoring "noauto" option for root device
	[  +0.240969] systemd-fstab-generator[3138]: Ignoring "noauto" option for root device
	[  +0.217290] systemd-fstab-generator[3183]: Ignoring "noauto" option for root device
	[  +0.371422] systemd-fstab-generator[3242]: Ignoring "noauto" option for root device
	[Aug19 18:51] systemd-fstab-generator[3507]: Ignoring "noauto" option for root device
	[  +0.085616] kauditd_printk_skb: 184 callbacks suppressed
	[  +1.984129] systemd-fstab-generator[3627]: Ignoring "noauto" option for root device
	[Aug19 18:52] kauditd_printk_skb: 81 callbacks suppressed
	[Aug19 18:55] systemd-fstab-generator[9158]: Ignoring "noauto" option for root device
	[Aug19 18:56] kauditd_printk_skb: 70 callbacks suppressed
	[Aug19 18:59] systemd-fstab-generator[10102]: Ignoring "noauto" option for root device
	[Aug19 19:00] kauditd_printk_skb: 54 callbacks suppressed
	
	
	==> kernel <==
	 19:04:02 up 14 min,  0 users,  load average: 0.09, 0.14, 0.10
	Linux functional-124593 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d] <==
	I0819 19:02:43.467753       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0819 19:02:43.743186       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:43.743280       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0819 19:02:43.751731       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 19:02:43.755112       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 19:02:43.758721       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 19:02:43.758852       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 19:02:43.759065       1 instance.go:232] Using reconciler: lease
	W0819 19:02:43.760135       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:44.743832       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:44.743963       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:44.761569       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:46.185098       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:46.207828       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:46.382784       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:48.822814       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:48.974921       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:49.351895       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:53.115051       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:53.237838       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:54.161281       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:59.099479       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:02:59.492816       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:03:01.794931       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0819 19:03:03.759664       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
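
This appears to be the root of the crash loop seen elsewhere in the log: every etcd client channel is refused on 127.0.0.1:2379, and after 20 seconds the storage factory gives up ("context deadline exceeded"), so the apiserver exits and port 8441 never comes up. Hedged first checks when debugging this by hand (not commands from the log):

    sudo crictl ps -a --name etcd                                      # is the etcd container present, and in what state?
    sudo ss -ltnp | grep 2379                                          # is anything listening on the etcd client port?
    sudo crictl logs "$(sudo crictl ps -a --name etcd -q | head -n1)"  # etcd's own last words, if the container exists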
	
	
	==> kube-controller-manager [e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9] <==
	I0819 19:02:44.745523       1 serving.go:386] Generated self-signed cert in-memory
	I0819 19:02:44.990908       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 19:02:44.990991       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:02:44.992289       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 19:02:44.992410       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 19:02:44.992616       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 19:02:44.992692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 19:03:04.995138       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.22:8441/healthz\": dial tcp 192.168.39.22:8441: connect: connection refused"
	
	
	==> kube-scheduler [e3ddc8f73f9e6d600c3eaaeb35ee3e69b91e761090350380d0877f26091591ca] <==
	E0819 19:03:29.616172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.22:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:32.074182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.22:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:32.074280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.22:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:34.545647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.22:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:34.545700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.22:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:46.993572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.22:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:46.993633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.22:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:49.036950       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.22:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:49.037018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.22:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:49.224105       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.22:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:49.224150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.22:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:50.723059       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.22:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:50.723109       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.22:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:51.629827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.22:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:51.629910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.22:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:56.598327       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.22:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:56.598382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.22:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:03:57.267271       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.22:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:03:57.267349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.22:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:04:00.473828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.22:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:04:00.473873       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.22:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:04:00.878775       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.22:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:04:00.878819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.22:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	W0819 19:04:02.643610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.22:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	E0819 19:04:02.643667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.22:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 19 19:03:48 functional-124593 kubelet[10109]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.385433   10109 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094228385106955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.385459   10109 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094228385106955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.752200   10109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/default/events\": dial tcp 192.168.39.22:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-124593.17ed3659059f9af6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-124593,UID:functional-124593,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-124593 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-124593,},FirstTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,LastTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:fu
nctional-124593,}"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: I0819 19:03:48.956588   10109 kubelet_node_status.go:72] "Attempting to register node" node="functional-124593"
	Aug 19 19:03:48 functional-124593 kubelet[10109]: E0819 19:03:48.957346   10109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 192.168.39.22:8441: connect: connection refused" node="functional-124593"
	Aug 19 19:03:52 functional-124593 kubelet[10109]: W0819 19:03:52.762074   10109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-124593&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	Aug 19 19:03:52 functional-124593 kubelet[10109]: E0819 19:03:52.762188   10109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-124593&limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	Aug 19 19:03:53 functional-124593 kubelet[10109]: I0819 19:03:53.319902   10109 scope.go:117] "RemoveContainer" containerID="effebbec1cbf2d286a85d488157a567b5bfbe531a0d565735bc11cfd0743341d"
	Aug 19 19:03:53 functional-124593 kubelet[10109]: E0819 19:03:53.320038   10109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-124593_kube-system(15de45e6effb382c12ca8494f33bff76)\"" pod="kube-system/kube-apiserver-functional-124593" podUID="15de45e6effb382c12ca8494f33bff76"
	Aug 19 19:03:53 functional-124593 kubelet[10109]: E0819 19:03:53.777098   10109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-124593?timeout=10s\": dial tcp 192.168.39.22:8441: connect: connection refused" interval="7s"
	Aug 19 19:03:55 functional-124593 kubelet[10109]: I0819 19:03:55.958779   10109 kubelet_node_status.go:72] "Attempting to register node" node="functional-124593"
	Aug 19 19:03:55 functional-124593 kubelet[10109]: E0819 19:03:55.959704   10109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 192.168.39.22:8441: connect: connection refused" node="functional-124593"
	Aug 19 19:03:56 functional-124593 kubelet[10109]: E0819 19:03:56.914816   10109 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://control-plane.minikube.internal:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	Aug 19 19:03:58 functional-124593 kubelet[10109]: E0819 19:03:58.326664   10109 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-functional-124593_kube-system_1d81c5d63cba07001a82e239314e39e2_1\" is already in use by 847f5422f7736630cbb85c950b0ce7fee365173fd629c7c35c4baca6131ab56d. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="fb4264c69f603d50f969b7ac2f0dad4c593bae0da887198d6e0d16aab460b73b"
	Aug 19 19:03:58 functional-124593 kubelet[10109]: E0819 19:03:58.326817   10109 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.15-0,Command:[etcd --advertise-client-urls=https://192.168.39.22:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.39.22:2380 --initial-cluster=functional-124593=https://192.168.39.22:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.39.22:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.39.22:2380 --name=functional-124593 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.
crt --proxy-refresh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{P
robeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-functional-124593_kube-system(1d81c5d63cba07001a82e239314e39e2): CreateContainerError: the c
ontainer name \"k8s_etcd_etcd-functional-124593_kube-system_1d81c5d63cba07001a82e239314e39e2_1\" is already in use by 847f5422f7736630cbb85c950b0ce7fee365173fd629c7c35c4baca6131ab56d. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Aug 19 19:03:58 functional-124593 kubelet[10109]: E0819 19:03:58.328076   10109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-functional-124593_kube-system_1d81c5d63cba07001a82e239314e39e2_1\\\" is already in use by 847f5422f7736630cbb85c950b0ce7fee365173fd629c7c35c4baca6131ab56d. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-functional-124593" podUID="1d81c5d63cba07001a82e239314e39e2"
	Aug 19 19:03:58 functional-124593 kubelet[10109]: E0819 19:03:58.387646   10109 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094238387370954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:58 functional-124593 kubelet[10109]: E0819 19:03:58.387683   10109 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094238387370954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:03:58 functional-124593 kubelet[10109]: E0819 19:03:58.754140   10109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/default/events\": dial tcp 192.168.39.22:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-124593.17ed3659059f9af6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-124593,UID:functional-124593,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-124593 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-124593,},FirstTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,LastTimestamp:2024-08-19 18:59:48.327103222 +0000 UTC m=+0.349325093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:fu
nctional-124593,}"
	Aug 19 19:03:59 functional-124593 kubelet[10109]: I0819 19:03:59.319810   10109 scope.go:117] "RemoveContainer" containerID="e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9"
	Aug 19 19:03:59 functional-124593 kubelet[10109]: E0819 19:03:59.319967   10109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-124593_kube-system(c71ff42fdd5902541920b0f91ca1cbbc)\"" pod="kube-system/kube-controller-manager-functional-124593" podUID="c71ff42fdd5902541920b0f91ca1cbbc"
	Aug 19 19:04:00 functional-124593 kubelet[10109]: E0819 19:04:00.778272   10109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-124593?timeout=10s\": dial tcp 192.168.39.22:8441: connect: connection refused" interval="7s"
	Aug 19 19:04:02 functional-124593 kubelet[10109]: W0819 19:04:02.697172   10109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8441: connect: connection refused
	Aug 19 19:04:02 functional-124593 kubelet[10109]: E0819 19:04:02.697279   10109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.22:8441: connect: connection refused" logger="UnhandledError"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:04:02.051726  448449 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-430949/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-124593 -n functional-124593
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-124593 -n functional-124593: exit status 2 (221.93984ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-124593" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.43s)

                                                
                                    
TestFunctional/parallel/NodeLabels (3.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-124593 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'beta.kubernetes.io/arch beta.kubernetes.io/os kubernetes.io/arch kubernetes.io/hostname kubernetes.io/os '

                                                
                                                
-- /stdout --
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'beta.kubernetes.io/arch beta.kubernetes.io/os kubernetes.io/arch kubernetes.io/hostname kubernetes.io/os '

                                                
                                                
-- /stdout --
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'beta.kubernetes.io/arch beta.kubernetes.io/os kubernetes.io/arch kubernetes.io/hostname kubernetes.io/os '

                                                
                                                
-- /stdout --
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'beta.kubernetes.io/arch beta.kubernetes.io/os kubernetes.io/arch kubernetes.io/hostname kubernetes.io/os '

                                                
                                                
-- /stdout --
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'beta.kubernetes.io/arch beta.kubernetes.io/os kubernetes.io/arch kubernetes.io/hostname kubernetes.io/os '

                                                
                                                
-- /stdout --
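The stdout above shows only the default kubelet-applied labels (beta.kubernetes.io/arch, kubernetes.io/hostname, and so on); none of the minikube.k8s.io/* labels the assertion expects are present. As a minimal, read-only sketch for inspecting the node labels by hand, assuming the same functional-124593 kubeconfig context and node name used by the test, one could run:

	kubectl --context functional-124593 get nodes --show-labels
	kubectl --context functional-124593 get node functional-124593 -o jsonpath='{.metadata.labels}'

Both commands only query the API server and do not modify the cluster.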
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-124593 -n functional-124593
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-124593 logs -n 25: (2.689680076s)
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| start     | -p functional-124593                                                     | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|           | --dry-run --alsologtostderr                                              |                   |         |         |                     |                     |
	|           | -v=1 --driver=kvm2                                                       |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                       | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|           | -p functional-124593                                                     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-124593 ssh sudo                                               | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|           | systemctl is-active docker                                               |                   |         |         |                     |                     |
	| ssh       | functional-124593 ssh sudo                                               | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|           | systemctl is-active containerd                                           |                   |         |         |                     |                     |
	| image     | functional-124593 image load --daemon                                    | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|           | kicbase/echo-server:functional-124593                                    |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh       | functional-124593 ssh stat                                               | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|           | /mount-9p/created-by-test                                                |                   |         |         |                     |                     |
	| ssh       | functional-124593 ssh stat                                               | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|           | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh       | functional-124593 ssh sudo                                               | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|           | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| image     | functional-124593 image ls                                               | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	| ssh       | functional-124593 ssh findmnt                                            | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| mount     | -p functional-124593                                                     | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdspecific-port1424652350/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| image     | functional-124593 image load --daemon                                    | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|           | kicbase/echo-server:functional-124593                                    |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh       | functional-124593 ssh findmnt                                            | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| image     | functional-124593 image ls                                               | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	| ssh       | functional-124593 ssh -- ls                                              | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|           | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh       | functional-124593 ssh sudo                                               | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:05 UTC |
	|           | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| image     | functional-124593 image load --daemon                                    | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:05 UTC |                     |
	|           | kicbase/echo-server:functional-124593                                    |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| mount     | -p functional-124593                                                     | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:05 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1426555090/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount     | -p functional-124593                                                     | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:05 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1426555090/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-124593 ssh findmnt                                            | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:05 UTC |                     |
	|           | -T /mount1                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-124593                                                     | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:05 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1426555090/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-124593 ssh findmnt                                            | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:05 UTC | 19 Aug 24 19:05 UTC |
	|           | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh       | functional-124593 ssh findmnt                                            | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:05 UTC | 19 Aug 24 19:05 UTC |
	|           | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh       | functional-124593 ssh findmnt                                            | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:05 UTC | 19 Aug 24 19:05 UTC |
	|           | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-124593                                                     | functional-124593 | jenkins | v1.33.1 | 19 Aug 24 19:05 UTC |                     |
	|           | --kill=true                                                              |                   |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:04:54
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:04:54.771430  449935 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:04:54.771557  449935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:04:54.771567  449935 out.go:358] Setting ErrFile to fd 2...
	I0819 19:04:54.771571  449935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:04:54.771740  449935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:04:54.772286  449935 out.go:352] Setting JSON to false
	I0819 19:04:54.773287  449935 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10046,"bootTime":1724084249,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:04:54.773355  449935 start.go:139] virtualization: kvm guest
	I0819 19:04:54.775443  449935 out.go:177] * [functional-124593] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:04:54.776733  449935 notify.go:220] Checking for updates...
	I0819 19:04:54.776803  449935 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 19:04:54.778367  449935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:04:54.779708  449935 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:04:54.780864  449935 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:04:54.782102  449935 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:04:54.783258  449935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:04:54.784894  449935 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:04:54.785326  449935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:04:54.785410  449935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:04:54.801373  449935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0819 19:04:54.801853  449935 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:04:54.802393  449935 main.go:141] libmachine: Using API Version  1
	I0819 19:04:54.802414  449935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:04:54.802776  449935 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:04:54.802949  449935 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 19:04:54.803180  449935 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 19:04:54.803472  449935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:04:54.803514  449935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:04:54.819245  449935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0819 19:04:54.819797  449935 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:04:54.820268  449935 main.go:141] libmachine: Using API Version  1
	I0819 19:04:54.820288  449935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:04:54.820651  449935 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:04:54.820861  449935 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 19:04:54.861252  449935 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:04:54.862802  449935 start.go:297] selected driver: kvm2
	I0819 19:04:54.862846  449935 start.go:901] validating driver "kvm2" against &{Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:04:54.862970  449935 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:04:54.864082  449935 cni.go:84] Creating CNI manager for ""
	I0819 19:04:54.864099  449935 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:04:54.864138  449935 start.go:340] cluster config:
	{Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:04:54.866008  449935 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.756353113Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbe2b0b3-e1e4-48f0-9776-0f7a2a66aaa2 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.757310266Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edc8d1a9-ce6d-492e-b337-e340b9d73b66 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.758045911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094302758022123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:244936,},InodesUsed:&UInt64Value{Value:101,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edc8d1a9-ce6d-492e-b337-e340b9d73b66 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.758686832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=792780bd-6c37-49a9-a796-870f4342447c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.758796761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=792780bd-6c37-49a9-a796-870f4342447c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.759259906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:759c34d5180bee46c09515939458cc9ad0bf564dc123f4d1f669e1bed0c46057,PodSandboxId:fd9df0fecbf26dc6e1a2a32343a647d410abf73cdfbb89f3d49132fecd48348d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1724094295133038945,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 23f3f036-ffe3-4d51-982f-d0ad3229b7f6,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3552b8b8a58e188db056bf32bdb200367014a4315ef28fc3152195c663fa63c5,PodSandboxId:717d587d66e8ca1dfc091f9131280a3e5bde532f23a9bdc7f12d64f43e3f6232,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c,State:CONTAINER_EXITED,CreatedAt:1724094293190821081,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b91548ea-a975-4e04-8fd6-5d7432f47df9,},Annotations:map[string]string{io.kubernetes.container.hash: 18495a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067feceb65add120b32f5b53f2d613f3831d3a8dcc0e813382c393d2e0228de5,PodSandboxId:0a3337fc6632f1970d3eb665eb0a50c4a6710c33ab416799db8d239d9ae3c8ac,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1724094286280862400,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-qc986,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddd55849-ad70-4d7f-be62-906aaf700473,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03af676235e031c10f688c46fd57ecb358c20b75b53dcf44e8436d0e9e42219b,PodSandboxId:8d62286b596ae7ae3bb45da725ac24e979b16240b3d5a407d984506dd1808e96,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1724094286198419107,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-87rkf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12da4ea6-f37f-4870-8cb4-180985d63872,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378a8b2ff7839642c07be5f716d416df44be569960a8b842c4024ccbdb8810ee,PodSandboxId:14b835d2d3cc49c9e8b143d3c46547d25ba5f0cdfa757da9991d70b70c8a0914,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724094283181048007,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx-svc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e23cea6e-b5eb-44d1-a60a-0589faee104e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protoc
ol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a899dd7dca094e152117eadd14bb91074c9c4aa534330047b64090ddad9fa6c,PodSandboxId:d5aadb9b533271e0a357e7520d014ed5942230b385828114b9c3248206e88431,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094268749385859,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c390b22-3b82-4f13-8bd2-883299635128,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf9
61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc2be73611d1820ae0fae725f50f7cd70b202166835dd71d2aa96b2c29c119,PodSandboxId:884f14efe28ff7762dd1d895f1cac862fe86a4f47d4b5087267d0a10c4a57186,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094261513265010,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-52mzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64e13bd0-64ee-43c5-9bf7-0f8a1490ccaf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.port
s: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c87dfccf44c0a9ad08117a979954b7e83794f89753437edb3fd927a8296db1,PodSandboxId:215a00db7c461089938fc0a76c38e700a413626e61154d2d5595da42036ca911,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094261453160678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x9tpk,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff3163a-9790-4873-862c-ee56003d6dbe,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd3c117d8bdf5b50083ec0e9c7fa5e2677bda544ee54c22a5d8923678c3dbf,PodSandboxId:24d9bc707d8e69aeab0177f45962c09b3f02f9855b599a9cb4a63e673a778c41,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf
049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094261115152837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kmnjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb61258-2c36-4fad-9aef-e65082f0de2b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6476e5a22ef93b76c0b8c58a141675e989088f3d9f4401e04b70a2f61b3406f,PodSandboxId:214dd80386fccbd1d44e0e6b6f3eac5e607dfcff59c463a4bb0d06e70da50636,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Sta
te:CONTAINER_RUNNING,CreatedAt:1724094249957711039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c5ddbfa63eba68c95c71e04faedf04f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6aa29eb1529a85268d2dc5ce08077e347ca5b605f24e5eb0744327e929fa7b,PodSandboxId:27cda86b040703226e941720bcacbecfd376ae746dcf345d04c7fbc2527b5913,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,Cr
eatedAt:1724094249724618953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d81c5d63cba07001a82e239314e39e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16698665309e7da4c9b8540c92ca701d0b76719aef9f29d0bb14f2c7bcdbe6dc,PodSandboxId:c9190396cb93fda2c83b800dea9a40e33b984c4e39122fed0038598dbf28ca2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:6,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094249720901629,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:159f792fcd37ab7e1963c24a9200fdbd432900fc07886593c5806b2259146463,PodSandboxId:b7a0dd2fb52fc2a8e5ef493c7375fbf299ac57f940a4b811c4a36bdc9e667ae8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094249732261616,Labels:m
ap[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850dfb841735db872ffa11ffc3c51b30121a8ae31433e69da31971d06c5940f4,PodSandboxId:27cda86b040703226e941720bcacbecfd376ae746dcf345d04c7fbc2527b5913,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094246503274329,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d81c5d63cba07001a82e239314e39e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d8d75cdcbf68e1780941055e907547ce2068e258c35d479e7553dff289bee3,PodSandboxId:de331608570cf7c9e5022ffca3b8037574fc16a7a613f6545dd6f557609be181,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:5,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094245442698868,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329234273,Labels:map[string]string{io.kubernetes.container.na
me: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=792780bd-6c37-49a9-a796-870f4342447c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.833099578Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=663ae086-8811-4488-a342-2cb5ae557d3c name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.834035720Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b362e3b63704bd1490837eb4292e757a15757470e51275a8dd8e01fa6f611820,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-c5db448b4-9plsc,Uid:aa85bac0-2fbf-4fbb-93f2-1294716658b8,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724094296504734346,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-9plsc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: aa85bac0-2fbf-4fbb-93f2-1294716658b8,k8s-app: dashboard-metrics-scraper,pod-template-hash: c5db448b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T19:04:56.174121864Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:41f3b358ab95b6daa9e2ac6da4564df86665d2cf4a114d2eeebb3b
305710bf2a,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-695b96c756-n2fqj,Uid:c8a3606c-aef1-4f31-95d8-c58d2914fe94,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724094296468350398,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-n2fqj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: c8a3606c-aef1-4f31-95d8-c58d2914fe94,k8s-app: kubernetes-dashboard,pod-template-hash: 695b96c756,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T19:04:56.160935179Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fd9df0fecbf26dc6e1a2a32343a647d410abf73cdfbb89f3d49132fecd48348d,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:23f3f036-ffe3-4d51-982f-d0ad3229b7f6,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724094293031823103,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 23f3f036-ffe3-4d51-982f-d0ad3229b7f6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T19:04:52.262895820Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:717d587d66e8ca1dfc091f9131280a3e5bde532f23a9bdc7f12d64f43e3f6232,Metadata:&PodSandboxMetadata{Name:sp-pod,Uid:b91548ea-a975-4e04-8fd6-5d7432f47df9,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724094287269599619,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b91548ea-a975-4e04-8fd6-5d7432f47df9,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"dock
er.io/nginx\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2024-08-19T19:04:46.962731078Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a3337fc6632f1970d3eb665eb0a50c4a6710c33ab416799db8d239d9ae3c8ac,Metadata:&PodSandboxMetadata{Name:hello-node-6b9f76b5c7-qc986,Uid:ddd55849-ad70-4d7f-be62-906aaf700473,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724094280228253441,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-6b9f76b5c7-qc986,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddd55849-ad70-4d7f-be62-906aaf700473,pod-template-hash: 6b9f76b5c7,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T19:04:39.921949682Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8d62286b596ae7ae3bb45da725ac24e979b162
40b3d5a407d984506dd1808e96,Metadata:&PodSandboxMetadata{Name:hello-node-connect-67bdd5bbb4-87rkf,Uid:12da4ea6-f37f-4870-8cb4-180985d63872,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724094279636641327,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-87rkf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12da4ea6-f37f-4870-8cb4-180985d63872,pod-template-hash: 67bdd5bbb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T19:04:39.328060247Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:14b835d2d3cc49c9e8b143d3c46547d25ba5f0cdfa757da9991d70b70c8a0914,Metadata:&PodSandboxMetadata{Name:nginx-svc,Uid:e23cea6e-b5eb-44d1-a60a-0589faee104e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724094279488805319,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx-svc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.
uid: e23cea6e-b5eb-44d1-a60a-0589faee104e,run: nginx-svc,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"nginx-svc\"},\"name\":\"nginx-svc\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"docker.io/nginx:alpine\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80,\"protocol\":\"TCP\"}]}]}}\n,kubernetes.io/config.seen: 2024-08-19T19:04:39.177615347Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c571058b2819c047a0f01afd0a50ff6b66d640094e165158fce8450d661d5962,Metadata:&PodSandboxMetadata{Name:invalid-svc,Uid:0ce678bd-26a9-4943-b32e-f923a0e97743,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724094275066347870,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: invalid-svc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ce678bd-26a9-4943-b32e-f923a0e97743,run: invalid-svc,},Annotation
s:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"invalid-svc\"},\"name\":\"invalid-svc\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"nonexistingimage:latest\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80,\"protocol\":\"TCP\"}]}]}}\n,kubernetes.io/config.seen: 2024-08-19T19:04:33.259703242Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d5aadb9b533271e0a357e7520d014ed5942230b385828114b9c3248206e88431,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6c390b22-3b82-4f13-8bd2-883299635128,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724094268658579086,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c390b22-3b82-4f13-8bd2-8832
99635128,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-19T19:04:28.352186654Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:884f14efe28ff7762dd1d895f1cac862fe86a4f47d4b5087267d0a10c4a57186,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-52mzn,Uid:64e13bd0-64ee-43c
5-9bf7-0f8a1490ccaf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724094261155932108,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-52mzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64e13bd0-64ee-43c5-9bf7-0f8a1490ccaf,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T19:04:20.840199665Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:215a00db7c461089938fc0a76c38e700a413626e61154d2d5595da42036ca911,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-x9tpk,Uid:0ff3163a-9790-4873-862c-ee56003d6dbe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724094261112269629,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-x9tpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff3163a-9790-4873-862c-ee56003d6dbe,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f
,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T19:04:20.806532110Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:24d9bc707d8e69aeab0177f45962c09b3f02f9855b599a9cb4a63e673a778c41,Metadata:&PodSandboxMetadata{Name:kube-proxy-kmnjp,Uid:ccb61258-2c36-4fad-9aef-e65082f0de2b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724094261019791067,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kmnjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb61258-2c36-4fad-9aef-e65082f0de2b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T19:04:20.713305756Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:214dd80386fccbd1d44e0e6b6f3eac5e607dfcff59c463a4bb0d06e70da50636,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-124593,Uid:2c5ddbfa63eba68c95c71e04faedf04f,Namespace:ku
be-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724094249702110390,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c5ddbfa63eba68c95c71e04faedf04f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.22:8441,kubernetes.io/config.hash: 2c5ddbfa63eba68c95c71e04faedf04f,kubernetes.io/config.seen: 2024-08-19T19:04:09.221769322Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5eaaa05459739163b2b877b70c38e84091ef36a4a4c170018983551742255ebf,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-124593,Uid:15de45e6effb382c12ca8494f33bff76,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1724094246279982887,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional
-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.22:8441,kubernetes.io/config.hash: 15de45e6effb382c12ca8494f33bff76,kubernetes.io/config.seen: 2024-08-19T18:59:48.283016207Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:27cda86b040703226e941720bcacbecfd376ae746dcf345d04c7fbc2527b5913,Metadata:&PodSandboxMetadata{Name:etcd-functional-124593,Uid:1d81c5d63cba07001a82e239314e39e2,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724094246276013838,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d81c5d63cba07001a82e239314e39e2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.22:2379,kuber
netes.io/config.hash: 1d81c5d63cba07001a82e239314e39e2,kubernetes.io/config.seen: 2024-08-19T18:59:48.283012652Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c9190396cb93fda2c83b800dea9a40e33b984c4e39122fed0038598dbf28ca2d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-124593,Uid:b53e73ff89e97c5f981e8291a8f62ab6,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724094246269469516,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b53e73ff89e97c5f981e8291a8f62ab6,kubernetes.io/config.seen: 2024-08-19T18:59:48.283018086Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b7a0dd2fb52fc2a8e5ef493c7375fbf299ac57f940a4b811c4a36bdc9e667ae8,Metadata:&PodSandboxMetadata{Name:kube-control
ler-manager-functional-124593,Uid:c71ff42fdd5902541920b0f91ca1cbbc,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724094246258632717,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c71ff42fdd5902541920b0f91ca1cbbc,kubernetes.io/config.seen: 2024-08-19T18:59:48.283017234Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:de331608570cf7c9e5022ffca3b8037574fc16a7a613f6545dd6f557609be181,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-124593,Uid:b53e73ff89e97c5f981e8291a8f62ab6,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724094245191857652,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-schedul
er-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b53e73ff89e97c5f981e8291a8f62ab6,kubernetes.io/config.seen: 2024-08-19T18:59:48.283018086Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:59013506b9174ea339d22d4e1595ce75d5914ac9c880e68a9b739018f586ec2d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-124593,Uid:15de45e6effb382c12ca8494f33bff76,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724093988754907620,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15de45e6effb382c12ca8494f33bff76,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.22:8441,kubernetes.io/config.hash: 15de45e6e
ffb382c12ca8494f33bff76,kubernetes.io/config.seen: 2024-08-19T18:59:48.283016207Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-124593,Uid:c71ff42fdd5902541920b0f91ca1cbbc,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724093988742033269,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c71ff42fdd5902541920b0f91ca1cbbc,kubernetes.io/config.seen: 2024-08-19T18:59:48.283017234Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=663ae086-8811-4488-a342-2cb5ae557d3c name=/runtime.v1.RuntimeService/ListPodSa
ndbox
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.835063331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b3a599d-95f7-4853-bd27-16c0032ba3f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.835128758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b3a599d-95f7-4853-bd27-16c0032ba3f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.835550948Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:759c34d5180bee46c09515939458cc9ad0bf564dc123f4d1f669e1bed0c46057,PodSandboxId:fd9df0fecbf26dc6e1a2a32343a647d410abf73cdfbb89f3d49132fecd48348d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1724094295133038945,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 23f3f036-ffe3-4d51-982f-d0ad3229b7f6,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3552b8b8a58e188db056bf32bdb200367014a4315ef28fc3152195c663fa63c5,PodSandboxId:717d587d66e8ca1dfc091f9131280a3e5bde532f23a9bdc7f12d64f43e3f6232,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c,State:CONTAINER_EXITED,CreatedAt:1724094293190821081,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b91548ea-a975-4e04-8fd6-5d7432f47df9,},Annotations:map[string]string{io.kubernetes.container.hash: 18495a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067feceb65add120b32f5b53f2d613f3831d3a8dcc0e813382c393d2e0228de5,PodSandboxId:0a3337fc6632f1970d3eb665eb0a50c4a6710c33ab416799db8d239d9ae3c8ac,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1724094286280862400,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-qc986,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddd55849-ad70-4d7f-be62-906aaf700473,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03af676235e031c10f688c46fd57ecb358c20b75b53dcf44e8436d0e9e42219b,PodSandboxId:8d62286b596ae7ae3bb45da725ac24e979b16240b3d5a407d984506dd1808e96,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1724094286198419107,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-87rkf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12da4ea6-f37f-4870-8cb4-180985d63872,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378a8b2ff7839642c07be5f716d416df44be569960a8b842c4024ccbdb8810ee,PodSandboxId:14b835d2d3cc49c9e8b143d3c46547d25ba5f0cdfa757da9991d70b70c8a0914,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724094283181048007,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx-svc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e23cea6e-b5eb-44d1-a60a-0589faee104e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protoc
ol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a899dd7dca094e152117eadd14bb91074c9c4aa534330047b64090ddad9fa6c,PodSandboxId:d5aadb9b533271e0a357e7520d014ed5942230b385828114b9c3248206e88431,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094268749385859,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c390b22-3b82-4f13-8bd2-883299635128,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf9
61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc2be73611d1820ae0fae725f50f7cd70b202166835dd71d2aa96b2c29c119,PodSandboxId:884f14efe28ff7762dd1d895f1cac862fe86a4f47d4b5087267d0a10c4a57186,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094261513265010,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-52mzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64e13bd0-64ee-43c5-9bf7-0f8a1490ccaf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.port
s: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c87dfccf44c0a9ad08117a979954b7e83794f89753437edb3fd927a8296db1,PodSandboxId:215a00db7c461089938fc0a76c38e700a413626e61154d2d5595da42036ca911,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094261453160678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x9tpk,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff3163a-9790-4873-862c-ee56003d6dbe,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd3c117d8bdf5b50083ec0e9c7fa5e2677bda544ee54c22a5d8923678c3dbf,PodSandboxId:24d9bc707d8e69aeab0177f45962c09b3f02f9855b599a9cb4a63e673a778c41,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf
049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094261115152837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kmnjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb61258-2c36-4fad-9aef-e65082f0de2b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6476e5a22ef93b76c0b8c58a141675e989088f3d9f4401e04b70a2f61b3406f,PodSandboxId:214dd80386fccbd1d44e0e6b6f3eac5e607dfcff59c463a4bb0d06e70da50636,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Sta
te:CONTAINER_RUNNING,CreatedAt:1724094249957711039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c5ddbfa63eba68c95c71e04faedf04f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6aa29eb1529a85268d2dc5ce08077e347ca5b605f24e5eb0744327e929fa7b,PodSandboxId:27cda86b040703226e941720bcacbecfd376ae746dcf345d04c7fbc2527b5913,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,Cr
eatedAt:1724094249724618953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d81c5d63cba07001a82e239314e39e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16698665309e7da4c9b8540c92ca701d0b76719aef9f29d0bb14f2c7bcdbe6dc,PodSandboxId:c9190396cb93fda2c83b800dea9a40e33b984c4e39122fed0038598dbf28ca2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:6,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094249720901629,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:159f792fcd37ab7e1963c24a9200fdbd432900fc07886593c5806b2259146463,PodSandboxId:b7a0dd2fb52fc2a8e5ef493c7375fbf299ac57f940a4b811c4a36bdc9e667ae8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094249732261616,Labels:m
ap[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850dfb841735db872ffa11ffc3c51b30121a8ae31433e69da31971d06c5940f4,PodSandboxId:27cda86b040703226e941720bcacbecfd376ae746dcf345d04c7fbc2527b5913,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094246503274329,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d81c5d63cba07001a82e239314e39e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d8d75cdcbf68e1780941055e907547ce2068e258c35d479e7553dff289bee3,PodSandboxId:de331608570cf7c9e5022ffca3b8037574fc16a7a613f6545dd6f557609be181,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:5,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094245442698868,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329234273,Labels:map[string]string{io.kubernetes.container.na
me: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b3a599d-95f7-4853-bd27-16c0032ba3f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.866842123Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=684b8ac3-4f49-4ad9-be08-83f09bfaab75 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.866933366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=684b8ac3-4f49-4ad9-be08-83f09bfaab75 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.868374751Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24255308-b6b2-473c-bd1d-e35a8e8ec742 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.869049687Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094302869021808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:244936,},InodesUsed:&UInt64Value{Value:101,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24255308-b6b2-473c-bd1d-e35a8e8ec742 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.870989859Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9277ebf4-c168-452d-96e4-1db4bad838d2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.871073168Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9277ebf4-c168-452d-96e4-1db4bad838d2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.872101250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:759c34d5180bee46c09515939458cc9ad0bf564dc123f4d1f669e1bed0c46057,PodSandboxId:fd9df0fecbf26dc6e1a2a32343a647d410abf73cdfbb89f3d49132fecd48348d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1724094295133038945,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 23f3f036-ffe3-4d51-982f-d0ad3229b7f6,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3552b8b8a58e188db056bf32bdb200367014a4315ef28fc3152195c663fa63c5,PodSandboxId:717d587d66e8ca1dfc091f9131280a3e5bde532f23a9bdc7f12d64f43e3f6232,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c,State:CONTAINER_EXITED,CreatedAt:1724094293190821081,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b91548ea-a975-4e04-8fd6-5d7432f47df9,},Annotations:map[string]string{io.kubernetes.container.hash: 18495a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067feceb65add120b32f5b53f2d613f3831d3a8dcc0e813382c393d2e0228de5,PodSandboxId:0a3337fc6632f1970d3eb665eb0a50c4a6710c33ab416799db8d239d9ae3c8ac,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1724094286280862400,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-qc986,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddd55849-ad70-4d7f-be62-906aaf700473,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03af676235e031c10f688c46fd57ecb358c20b75b53dcf44e8436d0e9e42219b,PodSandboxId:8d62286b596ae7ae3bb45da725ac24e979b16240b3d5a407d984506dd1808e96,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1724094286198419107,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-87rkf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12da4ea6-f37f-4870-8cb4-180985d63872,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378a8b2ff7839642c07be5f716d416df44be569960a8b842c4024ccbdb8810ee,PodSandboxId:14b835d2d3cc49c9e8b143d3c46547d25ba5f0cdfa757da9991d70b70c8a0914,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724094283181048007,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx-svc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e23cea6e-b5eb-44d1-a60a-0589faee104e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protoc
ol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a899dd7dca094e152117eadd14bb91074c9c4aa534330047b64090ddad9fa6c,PodSandboxId:d5aadb9b533271e0a357e7520d014ed5942230b385828114b9c3248206e88431,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094268749385859,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c390b22-3b82-4f13-8bd2-883299635128,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf9
61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc2be73611d1820ae0fae725f50f7cd70b202166835dd71d2aa96b2c29c119,PodSandboxId:884f14efe28ff7762dd1d895f1cac862fe86a4f47d4b5087267d0a10c4a57186,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094261513265010,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-52mzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64e13bd0-64ee-43c5-9bf7-0f8a1490ccaf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.port
s: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c87dfccf44c0a9ad08117a979954b7e83794f89753437edb3fd927a8296db1,PodSandboxId:215a00db7c461089938fc0a76c38e700a413626e61154d2d5595da42036ca911,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094261453160678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x9tpk,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff3163a-9790-4873-862c-ee56003d6dbe,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd3c117d8bdf5b50083ec0e9c7fa5e2677bda544ee54c22a5d8923678c3dbf,PodSandboxId:24d9bc707d8e69aeab0177f45962c09b3f02f9855b599a9cb4a63e673a778c41,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf
049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094261115152837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kmnjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb61258-2c36-4fad-9aef-e65082f0de2b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6476e5a22ef93b76c0b8c58a141675e989088f3d9f4401e04b70a2f61b3406f,PodSandboxId:214dd80386fccbd1d44e0e6b6f3eac5e607dfcff59c463a4bb0d06e70da50636,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Sta
te:CONTAINER_RUNNING,CreatedAt:1724094249957711039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c5ddbfa63eba68c95c71e04faedf04f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6aa29eb1529a85268d2dc5ce08077e347ca5b605f24e5eb0744327e929fa7b,PodSandboxId:27cda86b040703226e941720bcacbecfd376ae746dcf345d04c7fbc2527b5913,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,Cr
eatedAt:1724094249724618953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d81c5d63cba07001a82e239314e39e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16698665309e7da4c9b8540c92ca701d0b76719aef9f29d0bb14f2c7bcdbe6dc,PodSandboxId:c9190396cb93fda2c83b800dea9a40e33b984c4e39122fed0038598dbf28ca2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:6,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094249720901629,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:159f792fcd37ab7e1963c24a9200fdbd432900fc07886593c5806b2259146463,PodSandboxId:b7a0dd2fb52fc2a8e5ef493c7375fbf299ac57f940a4b811c4a36bdc9e667ae8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094249732261616,Labels:m
ap[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850dfb841735db872ffa11ffc3c51b30121a8ae31433e69da31971d06c5940f4,PodSandboxId:27cda86b040703226e941720bcacbecfd376ae746dcf345d04c7fbc2527b5913,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094246503274329,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d81c5d63cba07001a82e239314e39e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d8d75cdcbf68e1780941055e907547ce2068e258c35d479e7553dff289bee3,PodSandboxId:de331608570cf7c9e5022ffca3b8037574fc16a7a613f6545dd6f557609be181,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:5,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094245442698868,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329234273,Labels:map[string]string{io.kubernetes.container.na
me: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9277ebf4-c168-452d-96e4-1db4bad838d2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.950855160Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8a52af4-4acd-4101-9560-8e0a2a8baf7b name=/runtime.v1.RuntimeService/Version
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.950955927Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8a52af4-4acd-4101-9560-8e0a2a8baf7b name=/runtime.v1.RuntimeService/Version
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.952907715Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f34f3b66-3d3e-4461-9a20-639e1a25a1b1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.953603341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094302953577441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:244936,},InodesUsed:&UInt64Value{Value:101,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f34f3b66-3d3e-4461-9a20-639e1a25a1b1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.954425372Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad665059-6e6f-4dae-98a2-58c6728c45af name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.954575465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad665059-6e6f-4dae-98a2-58c6728c45af name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:05:02 functional-124593 crio[12355]: time="2024-08-19 19:05:02.954999524Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:759c34d5180bee46c09515939458cc9ad0bf564dc123f4d1f669e1bed0c46057,PodSandboxId:fd9df0fecbf26dc6e1a2a32343a647d410abf73cdfbb89f3d49132fecd48348d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1724094295133038945,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 23f3f036-ffe3-4d51-982f-d0ad3229b7f6,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3552b8b8a58e188db056bf32bdb200367014a4315ef28fc3152195c663fa63c5,PodSandboxId:717d587d66e8ca1dfc091f9131280a3e5bde532f23a9bdc7f12d64f43e3f6232,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c,State:CONTAINER_EXITED,CreatedAt:1724094293190821081,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b91548ea-a975-4e04-8fd6-5d7432f47df9,},Annotations:map[string]string{io.kubernetes.container.hash: 18495a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067feceb65add120b32f5b53f2d613f3831d3a8dcc0e813382c393d2e0228de5,PodSandboxId:0a3337fc6632f1970d3eb665eb0a50c4a6710c33ab416799db8d239d9ae3c8ac,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1724094286280862400,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-qc986,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddd55849-ad70-4d7f-be62-906aaf700473,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03af676235e031c10f688c46fd57ecb358c20b75b53dcf44e8436d0e9e42219b,PodSandboxId:8d62286b596ae7ae3bb45da725ac24e979b16240b3d5a407d984506dd1808e96,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1724094286198419107,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-87rkf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12da4ea6-f37f-4870-8cb4-180985d63872,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378a8b2ff7839642c07be5f716d416df44be569960a8b842c4024ccbdb8810ee,PodSandboxId:14b835d2d3cc49c9e8b143d3c46547d25ba5f0cdfa757da9991d70b70c8a0914,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724094283181048007,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx-svc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e23cea6e-b5eb-44d1-a60a-0589faee104e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protoc
ol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a899dd7dca094e152117eadd14bb91074c9c4aa534330047b64090ddad9fa6c,PodSandboxId:d5aadb9b533271e0a357e7520d014ed5942230b385828114b9c3248206e88431,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094268749385859,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c390b22-3b82-4f13-8bd2-883299635128,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf9
61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc2be73611d1820ae0fae725f50f7cd70b202166835dd71d2aa96b2c29c119,PodSandboxId:884f14efe28ff7762dd1d895f1cac862fe86a4f47d4b5087267d0a10c4a57186,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094261513265010,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-52mzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64e13bd0-64ee-43c5-9bf7-0f8a1490ccaf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.port
s: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c87dfccf44c0a9ad08117a979954b7e83794f89753437edb3fd927a8296db1,PodSandboxId:215a00db7c461089938fc0a76c38e700a413626e61154d2d5595da42036ca911,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094261453160678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x9tpk,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff3163a-9790-4873-862c-ee56003d6dbe,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd3c117d8bdf5b50083ec0e9c7fa5e2677bda544ee54c22a5d8923678c3dbf,PodSandboxId:24d9bc707d8e69aeab0177f45962c09b3f02f9855b599a9cb4a63e673a778c41,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf
049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094261115152837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kmnjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb61258-2c36-4fad-9aef-e65082f0de2b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6476e5a22ef93b76c0b8c58a141675e989088f3d9f4401e04b70a2f61b3406f,PodSandboxId:214dd80386fccbd1d44e0e6b6f3eac5e607dfcff59c463a4bb0d06e70da50636,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Sta
te:CONTAINER_RUNNING,CreatedAt:1724094249957711039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c5ddbfa63eba68c95c71e04faedf04f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6aa29eb1529a85268d2dc5ce08077e347ca5b605f24e5eb0744327e929fa7b,PodSandboxId:27cda86b040703226e941720bcacbecfd376ae746dcf345d04c7fbc2527b5913,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,Cr
eatedAt:1724094249724618953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d81c5d63cba07001a82e239314e39e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16698665309e7da4c9b8540c92ca701d0b76719aef9f29d0bb14f2c7bcdbe6dc,PodSandboxId:c9190396cb93fda2c83b800dea9a40e33b984c4e39122fed0038598dbf28ca2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:6,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094249720901629,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:159f792fcd37ab7e1963c24a9200fdbd432900fc07886593c5806b2259146463,PodSandboxId:b7a0dd2fb52fc2a8e5ef493c7375fbf299ac57f940a4b811c4a36bdc9e667ae8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094249732261616,Labels:m
ap[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850dfb841735db872ffa11ffc3c51b30121a8ae31433e69da31971d06c5940f4,PodSandboxId:27cda86b040703226e941720bcacbecfd376ae746dcf345d04c7fbc2527b5913,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094246503274329,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d81c5d63cba07001a82e239314e39e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d8d75cdcbf68e1780941055e907547ce2068e258c35d479e7553dff289bee3,PodSandboxId:de331608570cf7c9e5022ffca3b8037574fc16a7a613f6545dd6f557609be181,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:5,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094245442698868,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53e73ff89e97c5f981e8291a8f62ab6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9,PodSandboxId:1b98c8cb37fd8c65bba30881db672df4abbaaffba2a306a4d3446a712c2b9e2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094164329234273,Labels:map[string]string{io.kubernetes.container.na
me: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-124593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c71ff42fdd5902541920b0f91ca1cbbc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad665059-6e6f-4dae-98a2-58c6728c45af name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	759c34d5180be       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   7 seconds ago       Exited              mount-munger              0                   fd9df0fecbf26       busybox-mount
	3552b8b8a58e1       docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add       9 seconds ago       Exited              myfrontend                0                   717d587d66e8c       sp-pod
	067feceb65add       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969    16 seconds ago      Running             echoserver                0                   0a3337fc6632f       hello-node-6b9f76b5c7-qc986
	03af676235e03       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969    16 seconds ago      Running             echoserver                0                   8d62286b596ae       hello-node-connect-67bdd5bbb4-87rkf
	378a8b2ff7839       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0       19 seconds ago      Running             nginx                     0                   14b835d2d3cc4       nginx-svc
	1a899dd7dca09       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      34 seconds ago      Running             storage-provisioner       0                   d5aadb9b53327       storage-provisioner
	addc2be73611d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      41 seconds ago      Running             coredns                   0                   884f14efe28ff       coredns-6f6b679f8f-52mzn
	f1c87dfccf44c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      41 seconds ago      Running             coredns                   0                   215a00db7c461       coredns-6f6b679f8f-x9tpk
	cbcd3c117d8bd       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      41 seconds ago      Running             kube-proxy                0                   24d9bc707d8e6       kube-proxy-kmnjp
	b6476e5a22ef9       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      53 seconds ago      Running             kube-apiserver            0                   214dd80386fcc       kube-apiserver-functional-124593
	159f792fcd37a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      53 seconds ago      Running             kube-controller-manager   16                  b7a0dd2fb52fc       kube-controller-manager-functional-124593
	4b6aa29eb1529       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      53 seconds ago      Running             etcd                      3                   27cda86b04070       etcd-functional-124593
	16698665309e7       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      53 seconds ago      Running             kube-scheduler            6                   c9190396cb93f       kube-scheduler-functional-124593
	850dfb841735d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      56 seconds ago      Exited              etcd                      2                   27cda86b04070       etcd-functional-124593
	d1d8d75cdcbf6       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      57 seconds ago      Exited              kube-scheduler            5                   de331608570cf       kube-scheduler-functional-124593
	e764198234f75       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago       Exited              kube-controller-manager   15                  1b98c8cb37fd8       kube-controller-manager-functional-124593
	
	
	==> coredns [addc2be73611d1820ae0fae725f50f7cd70b202166835dd71d2aa96b2c29c119] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f1c87dfccf44c0a9ad08117a979954b7e83794f89753437edb3fd927a8296db1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               functional-124593
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-124593
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:04:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-124593
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:04:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:04:23 +0000   Mon, 19 Aug 2024 19:04:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:04:23 +0000   Mon, 19 Aug 2024 19:04:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:04:23 +0000   Mon, 19 Aug 2024 19:04:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:04:23 +0000   Mon, 19 Aug 2024 19:04:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    functional-124593
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 805a1ed3aa0b462c9eb530c0f272faab
	  System UUID:                805a1ed3-aa0b-462c-9eb5-30c0f272faab
	  Boot ID:                    b0b74df2-d084-4232-834a-fab00137a50c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-qc986                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  default                     hello-node-connect-67bdd5bbb4-87rkf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 coredns-6f6b679f8f-52mzn                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     43s
	  kube-system                 coredns-6f6b679f8f-x9tpk                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     43s
	  kube-system                 etcd-functional-124593                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         41s
	  kube-system                 kube-apiserver-functional-124593             250m (12%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-controller-manager-functional-124593    200m (10%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-kmnjp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-scheduler-functional-124593             100m (5%)     0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-9plsc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-n2fqj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             240Mi (6%)  340Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 42s                kube-proxy       
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node functional-124593 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node functional-124593 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x7 over 54s)  kubelet          Node functional-124593 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           44s                node-controller  Node functional-124593 event: Registered Node functional-124593 in Controller
	
	
	==> dmesg <==
	[  +0.217290] systemd-fstab-generator[3183]: Ignoring "noauto" option for root device
	[  +0.371422] systemd-fstab-generator[3242]: Ignoring "noauto" option for root device
	[Aug19 18:51] systemd-fstab-generator[3507]: Ignoring "noauto" option for root device
	[  +0.085616] kauditd_printk_skb: 184 callbacks suppressed
	[  +1.984129] systemd-fstab-generator[3627]: Ignoring "noauto" option for root device
	[Aug19 18:52] kauditd_printk_skb: 81 callbacks suppressed
	[Aug19 18:55] systemd-fstab-generator[9158]: Ignoring "noauto" option for root device
	[Aug19 18:56] kauditd_printk_skb: 70 callbacks suppressed
	[Aug19 18:59] systemd-fstab-generator[10102]: Ignoring "noauto" option for root device
	[Aug19 19:00] kauditd_printk_skb: 54 callbacks suppressed
	[Aug19 19:04] systemd-fstab-generator[12074]: Ignoring "noauto" option for root device
	[  +0.122804] systemd-fstab-generator[12086]: Ignoring "noauto" option for root device
	[  +0.205188] systemd-fstab-generator[12104]: Ignoring "noauto" option for root device
	[  +0.215955] systemd-fstab-generator[12210]: Ignoring "noauto" option for root device
	[  +0.374172] systemd-fstab-generator[12342]: Ignoring "noauto" option for root device
	[  +1.017698] systemd-fstab-generator[12670]: Ignoring "noauto" option for root device
	[  +2.226024] systemd-fstab-generator[12810]: Ignoring "noauto" option for root device
	[  +0.965174] kauditd_printk_skb: 226 callbacks suppressed
	[ +17.731104] systemd-fstab-generator[13574]: Ignoring "noauto" option for root device
	[  +0.085680] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.479676] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.209005] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.073821] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.466449] kauditd_printk_skb: 20 callbacks suppressed
	[Aug19 19:05] kauditd_printk_skb: 50 callbacks suppressed
	
	
	==> etcd [4b6aa29eb1529a85268d2dc5ce08077e347ca5b605f24e5eb0744327e929fa7b] <==
	{"level":"info","ts":"2024-08-19T19:04:11.923547Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"cde0bb267fc4e559","local-member-attributes":"{Name:functional-124593 ClientURLs:[https://192.168.39.22:2379]}","request-path":"/0/members/cde0bb267fc4e559/attributes","cluster-id":"eaed0234649c774e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T19:04:11.923634Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:04:11.923674Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:04:11.924817Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:04:11.923785Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T19:04:11.925234Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T19:04:11.923801Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:04:11.925382Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:04:11.925454Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:04:11.925525Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:04:11.925865Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.22:2379"}
	{"level":"info","ts":"2024-08-19T19:04:11.926046Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:04:11.926859Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T19:04:51.467258Z","caller":"traceutil/trace.go:171","msg":"trace[1333357089] linearizableReadLoop","detail":"{readStateIndex:539; appliedIndex:538; }","duration":"469.254076ms","start":"2024-08-19T19:04:50.997990Z","end":"2024-08-19T19:04:51.467244Z","steps":["trace[1333357089] 'read index received'  (duration: 469.115532ms)","trace[1333357089] 'applied index is now lower than readState.Index'  (duration: 138.1µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T19:04:51.467555Z","caller":"traceutil/trace.go:171","msg":"trace[1686795876] transaction","detail":"{read_only:false; response_revision:523; number_of_response:1; }","duration":"496.841839ms","start":"2024-08-19T19:04:50.970702Z","end":"2024-08-19T19:04:51.467544Z","steps":["trace[1686795876] 'process raft request'  (duration: 496.446753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:04:51.467875Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T19:04:50.970687Z","time spent":"496.892308ms","remote":"127.0.0.1:53232","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:522 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-19T19:04:51.468015Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"470.029038ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T19:04:51.468042Z","caller":"traceutil/trace.go:171","msg":"trace[1461578684] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:523; }","duration":"470.054924ms","start":"2024-08-19T19:04:50.997980Z","end":"2024-08-19T19:04:51.468034Z","steps":["trace[1461578684] 'agreement among raft nodes before linearized reading'  (duration: 470.01479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:04:51.468156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"451.243244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T19:04:51.468170Z","caller":"traceutil/trace.go:171","msg":"trace[429280561] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:523; }","duration":"451.257139ms","start":"2024-08-19T19:04:51.016908Z","end":"2024-08-19T19:04:51.468165Z","steps":["trace[429280561] 'agreement among raft nodes before linearized reading'  (duration: 451.232557ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:04:51.468187Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T19:04:51.016874Z","time spent":"451.308055ms","remote":"127.0.0.1:53252","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-19T19:04:59.966322Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"316.150941ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T19:04:59.966397Z","caller":"traceutil/trace.go:171","msg":"trace[1438958896] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:605; }","duration":"316.233861ms","start":"2024-08-19T19:04:59.650150Z","end":"2024-08-19T19:04:59.966384Z","steps":["trace[1438958896] 'range keys from in-memory index tree'  (duration: 316.031258ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:04:59.966420Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T19:04:59.650110Z","time spent":"316.303473ms","remote":"127.0.0.1:53032","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-08-19T19:05:02.153062Z","caller":"traceutil/trace.go:171","msg":"trace[2086859437] transaction","detail":"{read_only:false; response_revision:615; number_of_response:1; }","duration":"128.407568ms","start":"2024-08-19T19:05:02.024641Z","end":"2024-08-19T19:05:02.153048Z","steps":["trace[2086859437] 'process raft request'  (duration: 127.336229ms)"],"step_count":1}
	
	
	==> etcd [850dfb841735db872ffa11ffc3c51b30121a8ae31433e69da31971d06c5940f4] <==
	{"level":"info","ts":"2024-08-19T19:04:06.651933Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-124593","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.22:2380"],"listen-peer-urls":["https://192.168.39.22:2380"],"advertise-client-urls":["https://192.168.39.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-c
luster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-08-19T19:04:06.659029Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"6.826726ms"}
	{"level":"info","ts":"2024-08-19T19:04:06.659782Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-19T19:04:06.660829Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","commit-index":1}
	{"level":"info","ts":"2024-08-19T19:04:06.660924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-19T19:04:06.660984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 became follower at term 1"}
	{"level":"info","ts":"2024-08-19T19:04:06.661065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft cde0bb267fc4e559 [peers: [], term: 1, commit: 1, applied: 0, lastindex: 1, lastterm: 1]"}
	{"level":"warn","ts":"2024-08-19T19:04:06.662978Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-19T19:04:06.664870Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":1}
	{"level":"info","ts":"2024-08-19T19:04:06.667216Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-19T19:04:06.673394Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"cde0bb267fc4e559","timeout":"7s"}
	{"level":"info","ts":"2024-08-19T19:04:06.673475Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"cde0bb267fc4e559"}
	{"level":"info","ts":"2024-08-19T19:04:06.673540Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"cde0bb267fc4e559","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-19T19:04:06.674083Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:04:06.675362Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-19T19:04:06.675563Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T19:04:06.675628Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T19:04:06.675640Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T19:04:06.675873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 switched to configuration voters=(14835062946585175385)"}
	{"level":"info","ts":"2024-08-19T19:04:06.675943Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","added-peer-id":"cde0bb267fc4e559","added-peer-peer-urls":["https://192.168.39.22:2380"]}
	{"level":"info","ts":"2024-08-19T19:04:06.678330Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T19:04:06.679858Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2024-08-19T19:04:06.679878Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2024-08-19T19:04:06.680077Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"cde0bb267fc4e559","initial-advertise-peer-urls":["https://192.168.39.22:2380"],"listen-peer-urls":["https://192.168.39.22:2380"],"advertise-client-urls":["https://192.168.39.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T19:04:06.680103Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> kernel <==
	 19:05:04 up 15 min,  0 users,  load average: 1.47, 0.45, 0.20
	Linux functional-124593 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b6476e5a22ef93b76c0b8c58a141675e989088f3d9f4401e04b70a2f61b3406f] <==
	I0819 19:04:13.231257       1 policy_source.go:224] refreshing policies
	I0819 19:04:13.288376       1 controller.go:615] quota admission added evaluator for: namespaces
	I0819 19:04:13.380165       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 19:04:14.091092       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0819 19:04:14.100416       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0819 19:04:14.100471       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 19:04:14.963252       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 19:04:15.012921       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 19:04:15.099110       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 19:04:15.108158       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.22]
	I0819 19:04:15.110065       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 19:04:15.116673       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 19:04:15.135039       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 19:04:17.569626       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 19:04:17.586566       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 19:04:17.597229       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 19:04:20.485767       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0819 19:04:20.686240       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0819 19:04:33.290354       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.120.128"}
	I0819 19:04:39.220414       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.106.199"}
	I0819 19:04:39.391591       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.72.177"}
	I0819 19:04:40.008568       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.156.199"}
	I0819 19:04:56.247439       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.126.166"}
	I0819 19:04:56.279452       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.145.136"}
	E0819 19:05:00.142133       1 conn.go:339] Error on socket receive: read tcp 192.168.39.22:8441->192.168.39.1:44540: use of closed network connection
	
	
	==> kube-controller-manager [159f792fcd37ab7e1963c24a9200fdbd432900fc07886593c5806b2259146463] <==
	I0819 19:04:46.608713       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="11.356324ms"
	I0819 19:04:46.609000       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="56.129µs"
	I0819 19:04:46.630307       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6b9f76b5c7" duration="9.136396ms"
	I0819 19:04:46.630615       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6b9f76b5c7" duration="48.054µs"
	I0819 19:04:56.069584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="40.80509ms"
	E0819 19:04:56.069641       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 19:04:56.092824       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="19.659914ms"
	E0819 19:04:56.092865       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 19:04:56.092922       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="76.748577ms"
	E0819 19:04:56.092947       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 19:04:56.107794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="13.560639ms"
	E0819 19:04:56.107885       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 19:04:56.116241       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="22.189513ms"
	E0819 19:04:56.116289       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 19:04:56.127778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="17.86176ms"
	E0819 19:04:56.127862       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 19:04:56.128778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="11.115078ms"
	E0819 19:04:56.128821       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 19:04:56.167770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="37.467028ms"
	I0819 19:04:56.186391       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="56.04754ms"
	I0819 19:04:56.202082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.426575ms"
	I0819 19:04:56.202248       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="36.182µs"
	I0819 19:04:56.202311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="34.488831ms"
	I0819 19:04:56.202420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="31.68µs"
	I0819 19:04:56.225811       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="71.875µs"
	
	
	==> kube-controller-manager [e764198234f755ef159ee63baab91c8f8541e6f19023d3f6cdfecb10e9ab9ad9] <==
	I0819 19:02:44.745523       1 serving.go:386] Generated self-signed cert in-memory
	I0819 19:02:44.990908       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 19:02:44.990991       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:02:44.992289       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 19:02:44.992410       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 19:02:44.992616       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 19:02:44.992692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 19:03:04.995138       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.22:8441/healthz\": dial tcp 192.168.39.22:8441: connect: connection refused"
	
	
	==> kube-proxy [cbcd3c117d8bdf5b50083ec0e9c7fa5e2677bda544ee54c22a5d8923678c3dbf] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:04:21.441128       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:04:21.463320       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.22"]
	E0819 19:04:21.463369       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:04:21.643012       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:04:21.643100       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:04:21.643125       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:04:21.646728       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:04:21.647084       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:04:21.647130       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:04:21.648390       1 config.go:197] "Starting service config controller"
	I0819 19:04:21.648455       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:04:21.648523       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:04:21.648579       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:04:21.649038       1 config.go:326] "Starting node config controller"
	I0819 19:04:21.655520       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:04:21.748609       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:04:21.748660       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 19:04:21.756587       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [16698665309e7da4c9b8540c92ca701d0b76719aef9f29d0bb14f2c7bcdbe6dc] <==
	W0819 19:04:14.026958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 19:04:14.027071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:04:14.084629       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 19:04:14.084689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:04:14.092778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 19:04:14.093131       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:04:14.136916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 19:04:14.137030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:04:14.294040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 19:04:14.294077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:04:14.382741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 19:04:14.382830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:04:14.421551       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 19:04:14.421584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 19:04:14.523212       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 19:04:14.523386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:04:14.526873       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 19:04:14.527013       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 19:04:14.587854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 19:04:14.587902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 19:04:14.706889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 19:04:14.706971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 19:04:14.727430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 19:04:14.727581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 19:04:17.458667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d1d8d75cdcbf68e1780941055e907547ce2068e258c35d479e7553dff289bee3] <==
	
	
	==> kubelet <==
	Aug 19 19:04:56 functional-124593 kubelet[12817]: I0819 19:04:56.278427   12817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c8a3606c-aef1-4f31-95d8-c58d2914fe94-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-n2fqj\" (UID: \"c8a3606c-aef1-4f31-95d8-c58d2914fe94\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-n2fqj"
	Aug 19 19:04:56 functional-124593 kubelet[12817]: I0819 19:04:56.278441   12817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aa85bac0-2fbf-4fbb-93f2-1294716658b8-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-9plsc\" (UID: \"aa85bac0-2fbf-4fbb-93f2-1294716658b8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-9plsc"
	Aug 19 19:04:56 functional-124593 kubelet[12817]: I0819 19:04:56.882288   12817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/23f3f036-ffe3-4d51-982f-d0ad3229b7f6-test-volume\") pod \"23f3f036-ffe3-4d51-982f-d0ad3229b7f6\" (UID: \"23f3f036-ffe3-4d51-982f-d0ad3229b7f6\") "
	Aug 19 19:04:56 functional-124593 kubelet[12817]: I0819 19:04:56.882357   12817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxvpx\" (UniqueName: \"kubernetes.io/projected/23f3f036-ffe3-4d51-982f-d0ad3229b7f6-kube-api-access-kxvpx\") pod \"23f3f036-ffe3-4d51-982f-d0ad3229b7f6\" (UID: \"23f3f036-ffe3-4d51-982f-d0ad3229b7f6\") "
	Aug 19 19:04:56 functional-124593 kubelet[12817]: I0819 19:04:56.882555   12817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23f3f036-ffe3-4d51-982f-d0ad3229b7f6-test-volume" (OuterVolumeSpecName: "test-volume") pod "23f3f036-ffe3-4d51-982f-d0ad3229b7f6" (UID: "23f3f036-ffe3-4d51-982f-d0ad3229b7f6"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 19 19:04:56 functional-124593 kubelet[12817]: I0819 19:04:56.884890   12817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f3f036-ffe3-4d51-982f-d0ad3229b7f6-kube-api-access-kxvpx" (OuterVolumeSpecName: "kube-api-access-kxvpx") pod "23f3f036-ffe3-4d51-982f-d0ad3229b7f6" (UID: "23f3f036-ffe3-4d51-982f-d0ad3229b7f6"). InnerVolumeSpecName "kube-api-access-kxvpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 19:04:56 functional-124593 kubelet[12817]: I0819 19:04:56.983286   12817 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kxvpx\" (UniqueName: \"kubernetes.io/projected/23f3f036-ffe3-4d51-982f-d0ad3229b7f6-kube-api-access-kxvpx\") on node \"functional-124593\" DevicePath \"\""
	Aug 19 19:04:56 functional-124593 kubelet[12817]: I0819 19:04:56.983373   12817 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/23f3f036-ffe3-4d51-982f-d0ad3229b7f6-test-volume\") on node \"functional-124593\" DevicePath \"\""
	Aug 19 19:04:57 functional-124593 kubelet[12817]: I0819 19:04:57.705846   12817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd9df0fecbf26dc6e1a2a32343a647d410abf73cdfbb89f3d49132fecd48348d"
	Aug 19 19:04:59 functional-124593 kubelet[12817]: E0819 19:04:59.340979   12817 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094299339139083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:244936,},InodesUsed:&UInt64Value{Value:101,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:04:59 functional-124593 kubelet[12817]: E0819 19:04:59.341006   12817 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094299339139083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:244936,},InodesUsed:&UInt64Value{Value:101,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:05:00 functional-124593 kubelet[12817]: I0819 19:05:00.925647   12817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/b91548ea-a975-4e04-8fd6-5d7432f47df9-pvc-67e237db-1810-44dc-8123-64ca17f4ba8d\") pod \"b91548ea-a975-4e04-8fd6-5d7432f47df9\" (UID: \"b91548ea-a975-4e04-8fd6-5d7432f47df9\") "
	Aug 19 19:05:00 functional-124593 kubelet[12817]: I0819 19:05:00.925695   12817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c49ch\" (UniqueName: \"kubernetes.io/projected/b91548ea-a975-4e04-8fd6-5d7432f47df9-kube-api-access-c49ch\") pod \"b91548ea-a975-4e04-8fd6-5d7432f47df9\" (UID: \"b91548ea-a975-4e04-8fd6-5d7432f47df9\") "
	Aug 19 19:05:00 functional-124593 kubelet[12817]: I0819 19:05:00.926061   12817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b91548ea-a975-4e04-8fd6-5d7432f47df9-pvc-67e237db-1810-44dc-8123-64ca17f4ba8d" (OuterVolumeSpecName: "mypd") pod "b91548ea-a975-4e04-8fd6-5d7432f47df9" (UID: "b91548ea-a975-4e04-8fd6-5d7432f47df9"). InnerVolumeSpecName "pvc-67e237db-1810-44dc-8123-64ca17f4ba8d". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 19 19:05:00 functional-124593 kubelet[12817]: I0819 19:05:00.932726   12817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b91548ea-a975-4e04-8fd6-5d7432f47df9-kube-api-access-c49ch" (OuterVolumeSpecName: "kube-api-access-c49ch") pod "b91548ea-a975-4e04-8fd6-5d7432f47df9" (UID: "b91548ea-a975-4e04-8fd6-5d7432f47df9"). InnerVolumeSpecName "kube-api-access-c49ch". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 19:05:01 functional-124593 kubelet[12817]: I0819 19:05:01.026462   12817 reconciler_common.go:288] "Volume detached for volume \"pvc-67e237db-1810-44dc-8123-64ca17f4ba8d\" (UniqueName: \"kubernetes.io/host-path/b91548ea-a975-4e04-8fd6-5d7432f47df9-pvc-67e237db-1810-44dc-8123-64ca17f4ba8d\") on node \"functional-124593\" DevicePath \"\""
	Aug 19 19:05:01 functional-124593 kubelet[12817]: I0819 19:05:01.026524   12817 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-c49ch\" (UniqueName: \"kubernetes.io/projected/b91548ea-a975-4e04-8fd6-5d7432f47df9-kube-api-access-c49ch\") on node \"functional-124593\" DevicePath \"\""
	Aug 19 19:05:01 functional-124593 kubelet[12817]: I0819 19:05:01.832415   12817 scope.go:117] "RemoveContainer" containerID="3552b8b8a58e188db056bf32bdb200367014a4315ef28fc3152195c663fa63c5"
	Aug 19 19:05:02 functional-124593 kubelet[12817]: E0819 19:05:02.002082   12817 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b91548ea-a975-4e04-8fd6-5d7432f47df9" containerName="myfrontend"
	Aug 19 19:05:02 functional-124593 kubelet[12817]: E0819 19:05:02.002113   12817 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23f3f036-ffe3-4d51-982f-d0ad3229b7f6" containerName="mount-munger"
	Aug 19 19:05:02 functional-124593 kubelet[12817]: I0819 19:05:02.002142   12817 memory_manager.go:354] "RemoveStaleState removing state" podUID="b91548ea-a975-4e04-8fd6-5d7432f47df9" containerName="myfrontend"
	Aug 19 19:05:02 functional-124593 kubelet[12817]: I0819 19:05:02.002148   12817 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f3f036-ffe3-4d51-982f-d0ad3229b7f6" containerName="mount-munger"
	Aug 19 19:05:02 functional-124593 kubelet[12817]: I0819 19:05:02.236707   12817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scqb7\" (UniqueName: \"kubernetes.io/projected/5eeb6eab-df59-4e48-953f-94784b2f45ae-kube-api-access-scqb7\") pod \"sp-pod\" (UID: \"5eeb6eab-df59-4e48-953f-94784b2f45ae\") " pod="default/sp-pod"
	Aug 19 19:05:02 functional-124593 kubelet[12817]: I0819 19:05:02.236754   12817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-67e237db-1810-44dc-8123-64ca17f4ba8d\" (UniqueName: \"kubernetes.io/host-path/5eeb6eab-df59-4e48-953f-94784b2f45ae-pvc-67e237db-1810-44dc-8123-64ca17f4ba8d\") pod \"sp-pod\" (UID: \"5eeb6eab-df59-4e48-953f-94784b2f45ae\") " pod="default/sp-pod"
	Aug 19 19:05:03 functional-124593 kubelet[12817]: I0819 19:05:03.287923   12817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b91548ea-a975-4e04-8fd6-5d7432f47df9" path="/var/lib/kubelet/pods/b91548ea-a975-4e04-8fd6-5d7432f47df9/volumes"
	
	
	==> storage-provisioner [1a899dd7dca094e152117eadd14bb91074c9c4aa534330047b64090ddad9fa6c] <==
	I0819 19:04:28.824657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 19:04:28.836196       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 19:04:28.836289       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 19:04:28.847450       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 19:04:28.847724       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-124593_cd234fc3-6b6e-4733-8871-28f0955a5507!
	I0819 19:04:28.849023       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a0bf4060-1474-4b1b-b3dc-6f7e174808f7", APIVersion:"v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-124593_cd234fc3-6b6e-4733-8871-28f0955a5507 became leader
	I0819 19:04:28.948708       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-124593_cd234fc3-6b6e-4733-8871-28f0955a5507!
	I0819 19:04:45.124382       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0819 19:04:45.124444       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    cc09aaa3-ae4b-4c93-8362-b5d5ca97cead 372 0 2024-08-19 19:04:28 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-19 19:04:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-67e237db-1810-44dc-8123-64ca17f4ba8d &PersistentVolumeClaim{ObjectMeta:{myclaim  default  67e237db-1810-44dc-8123-64ca17f4ba8d 491 0 2024-08-19 19:04:45 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-19 19:04:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-19 19:04:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0819 19:04:45.124927       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-67e237db-1810-44dc-8123-64ca17f4ba8d" provisioned
	I0819 19:04:45.124942       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0819 19:04:45.124947       1 volume_store.go:212] Trying to save persistentvolume "pvc-67e237db-1810-44dc-8123-64ca17f4ba8d"
	I0819 19:04:45.126782       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"67e237db-1810-44dc-8123-64ca17f4ba8d", APIVersion:"v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0819 19:04:45.144794       1 volume_store.go:219] persistentvolume "pvc-67e237db-1810-44dc-8123-64ca17f4ba8d" saved
	I0819 19:04:45.147331       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"67e237db-1810-44dc-8123-64ca17f4ba8d", APIVersion:"v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-67e237db-1810-44dc-8123-64ca17f4ba8d
	

                                                
                                                
-- /stdout --
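The repeated "forbidden" list/watch warnings in the kube-scheduler section above are the usual transient RBAC failures during control-plane startup; in this log they end before the "Caches are synced" message at 19:04:17. A minimal spot-check, assuming kubectl access to the same functional-124593 context (hypothetical commands, not part of the test run):

	# Confirm the scheduler's permissions resolve once the cluster is up (hypothetical spot-check).
	kubectl --context functional-124593 auth can-i list pods --as=system:kube-scheduler
	kubectl --context functional-124593 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler

The kubelet's "missing image stats" eviction-manager errors can be cross-checked against the CRI-O image filesystem the same way, e.g. minikube -p functional-124593 ssh "sudo crictl imagefsinfo".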
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-124593 -n functional-124593
helpers_test.go:261: (dbg) Run:  kubectl --context functional-124593 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod dashboard-metrics-scraper-c5db448b4-9plsc
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/NodeLabels]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-124593 describe pod busybox-mount sp-pod dashboard-metrics-scraper-c5db448b4-9plsc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-124593 describe pod busybox-mount sp-pod dashboard-metrics-scraper-c5db448b4-9plsc: exit status 1 (93.607374ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-124593/192.168.39.22
	Start Time:       Mon, 19 Aug 2024 19:04:52 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  cri-o://759c34d5180bee46c09515939458cc9ad0bf564dc123f4d1f669e1bed0c46057
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 19 Aug 2024 19:04:55 +0000
	      Finished:     Mon, 19 Aug 2024 19:04:55 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kxvpx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kxvpx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  13s   default-scheduler  Successfully assigned default/busybox-mount to functional-124593
	  Normal  Pulling    12s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.746s (1.746s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10s   kubelet            Created container mount-munger
	  Normal  Started    10s   kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-124593/192.168.39.22
	Start Time:       Mon, 19 Aug 2024 19:05:02 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-scqb7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-scqb7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/sp-pod to functional-124593
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-9plsc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-124593 describe pod busybox-mount sp-pod dashboard-metrics-scraper-c5db448b4-9plsc: exit status 1
--- FAIL: TestFunctional/parallel/NodeLabels (3.55s)
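The describe step above exits 1 only because dashboard-metrics-scraper-c5db448b4-9plsc was deleted between the field-selector query and the describe call (NotFound in stderr); busybox-mount had already completed and sp-pod was still pulling docker.io/nginx. A more tolerant variant of that post-mortem loop could look like the following sketch (hypothetical, not part of helpers_test.go):

	# Describe each candidate pod separately so a single NotFound does not fail the whole step.
	for p in busybox-mount sp-pod dashboard-metrics-scraper-c5db448b4-9plsc; do
	  kubectl --context functional-124593 describe pod "$p" 2>/dev/null || echo "pod $p no longer exists"
	done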

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 node stop m02 -v=7 --alsologtostderr
E0819 19:10:19.939257  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:11:00.900657  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-163902 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.483976226s)

                                                
                                                
-- stdout --
	* Stopping node "ha-163902-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:10:06.821935  456051 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:10:06.822077  456051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:10:06.822087  456051 out.go:358] Setting ErrFile to fd 2...
	I0819 19:10:06.822091  456051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:10:06.822306  456051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:10:06.822551  456051 mustload.go:65] Loading cluster: ha-163902
	I0819 19:10:06.822914  456051 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:10:06.822929  456051 stop.go:39] StopHost: ha-163902-m02
	I0819 19:10:06.823340  456051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:10:06.823394  456051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:10:06.839296  456051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
	I0819 19:10:06.839808  456051 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:10:06.840400  456051 main.go:141] libmachine: Using API Version  1
	I0819 19:10:06.840427  456051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:10:06.840746  456051 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:10:06.842998  456051 out.go:177] * Stopping node "ha-163902-m02"  ...
	I0819 19:10:06.844012  456051 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 19:10:06.844043  456051 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:10:06.844295  456051 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 19:10:06.844331  456051 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:10:06.847500  456051 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:10:06.847978  456051 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:10:06.848007  456051 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:10:06.848201  456051 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:10:06.848405  456051 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:10:06.848616  456051 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:10:06.848801  456051 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	I0819 19:10:06.931831  456051 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 19:10:06.984827  456051 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 19:10:07.039156  456051 main.go:141] libmachine: Stopping "ha-163902-m02"...
	I0819 19:10:07.039208  456051 main.go:141] libmachine: (ha-163902-m02) Calling .GetState
	I0819 19:10:07.040890  456051 main.go:141] libmachine: (ha-163902-m02) Calling .Stop
	I0819 19:10:07.044436  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 0/120
	I0819 19:10:08.047107  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 1/120
	I0819 19:10:09.048356  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 2/120
	I0819 19:10:10.049800  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 3/120
	I0819 19:10:11.051827  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 4/120
	I0819 19:10:12.054168  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 5/120
	I0819 19:10:13.055592  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 6/120
	I0819 19:10:14.056948  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 7/120
	I0819 19:10:15.059010  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 8/120
	I0819 19:10:16.060575  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 9/120
	I0819 19:10:17.062483  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 10/120
	I0819 19:10:18.064088  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 11/120
	I0819 19:10:19.066158  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 12/120
	I0819 19:10:20.067990  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 13/120
	I0819 19:10:21.069588  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 14/120
	I0819 19:10:22.071407  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 15/120
	I0819 19:10:23.072944  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 16/120
	I0819 19:10:24.074747  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 17/120
	I0819 19:10:25.076766  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 18/120
	I0819 19:10:26.078619  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 19/120
	I0819 19:10:27.080914  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 20/120
	I0819 19:10:28.082340  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 21/120
	I0819 19:10:29.084342  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 22/120
	I0819 19:10:30.085896  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 23/120
	I0819 19:10:31.088012  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 24/120
	I0819 19:10:32.090915  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 25/120
	I0819 19:10:33.093278  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 26/120
	I0819 19:10:34.094663  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 27/120
	I0819 19:10:35.096224  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 28/120
	I0819 19:10:36.097738  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 29/120
	I0819 19:10:37.099918  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 30/120
	I0819 19:10:38.101629  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 31/120
	I0819 19:10:39.103745  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 32/120
	I0819 19:10:40.106186  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 33/120
	I0819 19:10:41.107846  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 34/120
	I0819 19:10:42.110258  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 35/120
	I0819 19:10:43.112076  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 36/120
	I0819 19:10:44.113674  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 37/120
	I0819 19:10:45.115696  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 38/120
	I0819 19:10:46.117324  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 39/120
	I0819 19:10:47.120034  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 40/120
	I0819 19:10:48.121626  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 41/120
	I0819 19:10:49.123886  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 42/120
	I0819 19:10:50.125399  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 43/120
	I0819 19:10:51.127877  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 44/120
	I0819 19:10:52.130152  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 45/120
	I0819 19:10:53.131806  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 46/120
	I0819 19:10:54.133413  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 47/120
	I0819 19:10:55.135679  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 48/120
	I0819 19:10:56.137248  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 49/120
	I0819 19:10:57.139646  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 50/120
	I0819 19:10:58.142287  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 51/120
	I0819 19:10:59.143997  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 52/120
	I0819 19:11:00.145602  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 53/120
	I0819 19:11:01.147675  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 54/120
	I0819 19:11:02.149882  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 55/120
	I0819 19:11:03.151876  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 56/120
	I0819 19:11:04.153307  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 57/120
	I0819 19:11:05.155187  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 58/120
	I0819 19:11:06.156662  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 59/120
	I0819 19:11:07.159118  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 60/120
	I0819 19:11:08.160809  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 61/120
	I0819 19:11:09.162286  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 62/120
	I0819 19:11:10.163929  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 63/120
	I0819 19:11:11.165436  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 64/120
	I0819 19:11:12.167746  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 65/120
	I0819 19:11:13.169250  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 66/120
	I0819 19:11:14.170819  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 67/120
	I0819 19:11:15.172379  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 68/120
	I0819 19:11:16.173902  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 69/120
	I0819 19:11:17.175442  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 70/120
	I0819 19:11:18.176906  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 71/120
	I0819 19:11:19.178563  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 72/120
	I0819 19:11:20.180170  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 73/120
	I0819 19:11:21.181549  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 74/120
	I0819 19:11:22.183827  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 75/120
	I0819 19:11:23.185512  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 76/120
	I0819 19:11:24.187728  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 77/120
	I0819 19:11:25.189802  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 78/120
	I0819 19:11:26.191613  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 79/120
	I0819 19:11:27.194110  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 80/120
	I0819 19:11:28.195759  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 81/120
	I0819 19:11:29.197333  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 82/120
	I0819 19:11:30.199992  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 83/120
	I0819 19:11:31.201911  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 84/120
	I0819 19:11:32.203734  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 85/120
	I0819 19:11:33.205172  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 86/120
	I0819 19:11:34.206763  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 87/120
	I0819 19:11:35.208133  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 88/120
	I0819 19:11:36.209887  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 89/120
	I0819 19:11:37.212802  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 90/120
	I0819 19:11:38.214635  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 91/120
	I0819 19:11:39.216548  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 92/120
	I0819 19:11:40.218296  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 93/120
	I0819 19:11:41.219767  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 94/120
	I0819 19:11:42.222004  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 95/120
	I0819 19:11:43.223438  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 96/120
	I0819 19:11:44.224915  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 97/120
	I0819 19:11:45.226464  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 98/120
	I0819 19:11:46.227931  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 99/120
	I0819 19:11:47.229611  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 100/120
	I0819 19:11:48.231201  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 101/120
	I0819 19:11:49.232908  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 102/120
	I0819 19:11:50.234508  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 103/120
	I0819 19:11:51.236090  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 104/120
	I0819 19:11:52.238221  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 105/120
	I0819 19:11:53.239615  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 106/120
	I0819 19:11:54.241039  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 107/120
	I0819 19:11:55.242391  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 108/120
	I0819 19:11:56.243913  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 109/120
	I0819 19:11:57.246080  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 110/120
	I0819 19:11:58.247473  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 111/120
	I0819 19:11:59.249190  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 112/120
	I0819 19:12:00.250935  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 113/120
	I0819 19:12:01.252418  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 114/120
	I0819 19:12:02.254478  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 115/120
	I0819 19:12:03.256125  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 116/120
	I0819 19:12:04.257454  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 117/120
	I0819 19:12:05.258902  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 118/120
	I0819 19:12:06.260288  456051 main.go:141] libmachine: (ha-163902-m02) Waiting for machine to stop 119/120
	I0819 19:12:07.260902  456051 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 19:12:07.261073  456051 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-163902 node stop m02 -v=7 --alsologtostderr": exit status 30
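The stop path above first backs up /etc/cni and /etc/kubernetes into /var/lib/minikube/backup, then asks the kvm2 driver to stop the VM and polls once per second for 120 iterations; here the guest never powered off, so the command gives up after roughly two minutes with exit status 30. If this reproduces, a host-side follow-up could look like the sketch below, assuming the libvirt domain is named after the node (hypothetical commands, not run by the test):

	# Check whether the domain ignored the graceful stop request, and hard power-off as a last resort.
	sudo virsh list --all | grep ha-163902-m02
	sudo virsh destroy ha-163902-m02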
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
E0819 19:12:22.822575  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr: exit status 3 (19.146465925s)

                                                
                                                
-- stdout --
	ha-163902
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-163902-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:12:07.308070  456487 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:12:07.308253  456487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:12:07.308264  456487 out.go:358] Setting ErrFile to fd 2...
	I0819 19:12:07.308271  456487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:12:07.308899  456487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:12:07.309306  456487 out.go:352] Setting JSON to false
	I0819 19:12:07.309346  456487 mustload.go:65] Loading cluster: ha-163902
	I0819 19:12:07.309581  456487 notify.go:220] Checking for updates...
	I0819 19:12:07.310324  456487 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:07.310356  456487 status.go:255] checking status of ha-163902 ...
	I0819 19:12:07.310870  456487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:07.310916  456487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:07.327492  456487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
	I0819 19:12:07.328021  456487 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:07.328749  456487 main.go:141] libmachine: Using API Version  1
	I0819 19:12:07.328787  456487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:07.329172  456487 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:07.329450  456487 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:12:07.331111  456487 status.go:330] ha-163902 host status = "Running" (err=<nil>)
	I0819 19:12:07.331135  456487 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:12:07.331572  456487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:07.331629  456487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:07.348383  456487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36521
	I0819 19:12:07.348899  456487 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:07.349430  456487 main.go:141] libmachine: Using API Version  1
	I0819 19:12:07.349459  456487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:07.349855  456487 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:07.350063  456487 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:12:07.353385  456487 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:07.353914  456487 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:12:07.353944  456487 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:07.354117  456487 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:12:07.354427  456487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:07.354473  456487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:07.369824  456487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45381
	I0819 19:12:07.370298  456487 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:07.370853  456487 main.go:141] libmachine: Using API Version  1
	I0819 19:12:07.370884  456487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:07.371294  456487 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:07.371467  456487 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:12:07.371661  456487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:07.371688  456487 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:12:07.375249  456487 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:07.375783  456487 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:12:07.375804  456487 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:07.375838  456487 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:12:07.376053  456487 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:12:07.376232  456487 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:12:07.376397  456487 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:12:07.460631  456487 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:07.466766  456487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:07.482289  456487 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:12:07.482332  456487 api_server.go:166] Checking apiserver status ...
	I0819 19:12:07.482368  456487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:07.497927  456487 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup
	W0819 19:12:07.509237  456487 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:07.509309  456487 ssh_runner.go:195] Run: ls
	I0819 19:12:07.513577  456487 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:12:07.518018  456487 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:12:07.518048  456487 status.go:422] ha-163902 apiserver status = Running (err=<nil>)
	I0819 19:12:07.518061  456487 status.go:257] ha-163902 status: &{Name:ha-163902 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:12:07.518084  456487 status.go:255] checking status of ha-163902-m02 ...
	I0819 19:12:07.518438  456487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:07.518464  456487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:07.534718  456487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36627
	I0819 19:12:07.535174  456487 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:07.535686  456487 main.go:141] libmachine: Using API Version  1
	I0819 19:12:07.535708  456487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:07.535999  456487 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:07.536167  456487 main.go:141] libmachine: (ha-163902-m02) Calling .GetState
	I0819 19:12:07.537733  456487 status.go:330] ha-163902-m02 host status = "Running" (err=<nil>)
	I0819 19:12:07.537756  456487 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:12:07.538066  456487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:07.538095  456487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:07.553447  456487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I0819 19:12:07.553995  456487 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:07.554607  456487 main.go:141] libmachine: Using API Version  1
	I0819 19:12:07.554632  456487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:07.554964  456487 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:07.555164  456487 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:12:07.557831  456487 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:07.558254  456487 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:12:07.558292  456487 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:07.558478  456487 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:12:07.558777  456487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:07.558806  456487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:07.573837  456487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45245
	I0819 19:12:07.574327  456487 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:07.574877  456487 main.go:141] libmachine: Using API Version  1
	I0819 19:12:07.574904  456487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:07.575213  456487 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:07.575396  456487 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:12:07.575580  456487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:07.575598  456487 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:12:07.578545  456487 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:07.579127  456487 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:12:07.579159  456487 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:07.579412  456487 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:12:07.579609  456487 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:12:07.579783  456487 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:12:07.579952  456487 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	W0819 19:12:26.053369  456487 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.162:22: connect: no route to host
	W0819 19:12:26.053518  456487 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E0819 19:12:26.053541  456487 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:12:26.053562  456487 status.go:257] ha-163902-m02 status: &{Name:ha-163902-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 19:12:26.053589  456487 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:12:26.053601  456487 status.go:255] checking status of ha-163902-m03 ...
	I0819 19:12:26.053922  456487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:26.053984  456487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:26.069619  456487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38817
	I0819 19:12:26.070062  456487 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:26.070550  456487 main.go:141] libmachine: Using API Version  1
	I0819 19:12:26.070582  456487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:26.070898  456487 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:26.071115  456487 main.go:141] libmachine: (ha-163902-m03) Calling .GetState
	I0819 19:12:26.072650  456487 status.go:330] ha-163902-m03 host status = "Running" (err=<nil>)
	I0819 19:12:26.072673  456487 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:12:26.073000  456487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:26.073053  456487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:26.089654  456487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42521
	I0819 19:12:26.090125  456487 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:26.090613  456487 main.go:141] libmachine: Using API Version  1
	I0819 19:12:26.090634  456487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:26.090991  456487 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:26.091198  456487 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:12:26.094102  456487 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:26.094535  456487 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:12:26.094571  456487 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:26.094768  456487 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:12:26.095171  456487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:26.095221  456487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:26.111006  456487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46871
	I0819 19:12:26.111459  456487 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:26.111955  456487 main.go:141] libmachine: Using API Version  1
	I0819 19:12:26.111977  456487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:26.112295  456487 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:26.112508  456487 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:12:26.112671  456487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:26.112689  456487 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:12:26.116610  456487 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:26.117073  456487 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:12:26.117102  456487 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:26.117308  456487 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:12:26.117505  456487 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:12:26.117656  456487 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:12:26.117773  456487 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:12:26.197202  456487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:26.213657  456487 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:12:26.213692  456487 api_server.go:166] Checking apiserver status ...
	I0819 19:12:26.213729  456487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:26.230560  456487 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0819 19:12:26.240843  456487 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:26.240903  456487 ssh_runner.go:195] Run: ls
	I0819 19:12:26.245415  456487 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:12:26.250361  456487 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:12:26.250395  456487 status.go:422] ha-163902-m03 apiserver status = Running (err=<nil>)
	I0819 19:12:26.250407  456487 status.go:257] ha-163902-m03 status: &{Name:ha-163902-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:12:26.250428  456487 status.go:255] checking status of ha-163902-m04 ...
	I0819 19:12:26.250742  456487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:26.250777  456487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:26.266307  456487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33475
	I0819 19:12:26.266749  456487 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:26.267239  456487 main.go:141] libmachine: Using API Version  1
	I0819 19:12:26.267263  456487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:26.267661  456487 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:26.267842  456487 main.go:141] libmachine: (ha-163902-m04) Calling .GetState
	I0819 19:12:26.269633  456487 status.go:330] ha-163902-m04 host status = "Running" (err=<nil>)
	I0819 19:12:26.269651  456487 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:12:26.269926  456487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:26.269961  456487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:26.285938  456487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36927
	I0819 19:12:26.286423  456487 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:26.286941  456487 main.go:141] libmachine: Using API Version  1
	I0819 19:12:26.286969  456487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:26.287352  456487 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:26.287547  456487 main.go:141] libmachine: (ha-163902-m04) Calling .GetIP
	I0819 19:12:26.290199  456487 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:26.290641  456487 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:12:26.290675  456487 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:26.290832  456487 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:12:26.291128  456487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:26.291165  456487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:26.307294  456487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35291
	I0819 19:12:26.307806  456487 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:26.308278  456487 main.go:141] libmachine: Using API Version  1
	I0819 19:12:26.308298  456487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:26.308646  456487 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:26.308865  456487 main.go:141] libmachine: (ha-163902-m04) Calling .DriverName
	I0819 19:12:26.309118  456487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:26.309158  456487 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHHostname
	I0819 19:12:26.312168  456487 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:26.312656  456487 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:12:26.312678  456487 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:26.312897  456487 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHPort
	I0819 19:12:26.313099  456487 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHKeyPath
	I0819 19:12:26.313292  456487 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHUsername
	I0819 19:12:26.313441  456487 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m04/id_rsa Username:docker}
	I0819 19:12:26.392199  456487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:26.406669  456487 status.go:257] ha-163902-m04 status: &{Name:ha-163902-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-163902 -n ha-163902
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-163902 logs -n 25: (1.362080751s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-163902 cp ha-163902-m03:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4137015802/001/cp-test_ha-163902-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m03:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902:/home/docker/cp-test_ha-163902-m03_ha-163902.txt                       |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902 sudo cat                                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m03_ha-163902.txt                                 |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m03:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m02:/home/docker/cp-test_ha-163902-m03_ha-163902-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m02 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m03_ha-163902-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m03:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04:/home/docker/cp-test_ha-163902-m03_ha-163902-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m04 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m03_ha-163902-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-163902 cp testdata/cp-test.txt                                                | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4137015802/001/cp-test_ha-163902-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902:/home/docker/cp-test_ha-163902-m04_ha-163902.txt                       |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902 sudo cat                                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m04_ha-163902.txt                                 |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m02:/home/docker/cp-test_ha-163902-m04_ha-163902-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m02 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m04_ha-163902-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03:/home/docker/cp-test_ha-163902-m04_ha-163902-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m03 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m04_ha-163902-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-163902 node stop m02 -v=7                                                     | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:05:31
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:05:31.418232  452010 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:05:31.418352  452010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:05:31.418358  452010 out.go:358] Setting ErrFile to fd 2...
	I0819 19:05:31.418362  452010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:05:31.418546  452010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:05:31.419129  452010 out.go:352] Setting JSON to false
	I0819 19:05:31.420120  452010 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10082,"bootTime":1724084249,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:05:31.420188  452010 start.go:139] virtualization: kvm guest
	I0819 19:05:31.422656  452010 out.go:177] * [ha-163902] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:05:31.424219  452010 notify.go:220] Checking for updates...
	I0819 19:05:31.424236  452010 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 19:05:31.425870  452010 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:05:31.427552  452010 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:05:31.429212  452010 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:05:31.430737  452010 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:05:31.432186  452010 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:05:31.433967  452010 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 19:05:31.471459  452010 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 19:05:31.473054  452010 start.go:297] selected driver: kvm2
	I0819 19:05:31.473074  452010 start.go:901] validating driver "kvm2" against <nil>
	I0819 19:05:31.473085  452010 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:05:31.473948  452010 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:05:31.474033  452010 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:05:31.490219  452010 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:05:31.490286  452010 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 19:05:31.490507  452010 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:05:31.490544  452010 cni.go:84] Creating CNI manager for ""
	I0819 19:05:31.490552  452010 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 19:05:31.490558  452010 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 19:05:31.490608  452010 start.go:340] cluster config:
	{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:05:31.490706  452010 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:05:31.492897  452010 out.go:177] * Starting "ha-163902" primary control-plane node in "ha-163902" cluster
	I0819 19:05:31.494365  452010 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:05:31.494418  452010 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:05:31.494432  452010 cache.go:56] Caching tarball of preloaded images
	I0819 19:05:31.494530  452010 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:05:31.494540  452010 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:05:31.494829  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:05:31.494853  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json: {Name:mkb31c7310cece5f6635574f2a3901077b4ca7df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:05:31.495004  452010 start.go:360] acquireMachinesLock for ha-163902: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:05:31.495032  452010 start.go:364] duration metric: took 15.004µs to acquireMachinesLock for "ha-163902"
	I0819 19:05:31.495049  452010 start.go:93] Provisioning new machine with config: &{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:05:31.495114  452010 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 19:05:31.496975  452010 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 19:05:31.497125  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:05:31.497232  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:05:31.512292  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I0819 19:05:31.512869  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:05:31.513604  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:05:31.513630  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:05:31.514037  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:05:31.514230  452010 main.go:141] libmachine: (ha-163902) Calling .GetMachineName
	I0819 19:05:31.514445  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:31.514633  452010 start.go:159] libmachine.API.Create for "ha-163902" (driver="kvm2")
	I0819 19:05:31.514661  452010 client.go:168] LocalClient.Create starting
	I0819 19:05:31.514691  452010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem
	I0819 19:05:31.514723  452010 main.go:141] libmachine: Decoding PEM data...
	I0819 19:05:31.514736  452010 main.go:141] libmachine: Parsing certificate...
	I0819 19:05:31.514784  452010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem
	I0819 19:05:31.514803  452010 main.go:141] libmachine: Decoding PEM data...
	I0819 19:05:31.514814  452010 main.go:141] libmachine: Parsing certificate...
	I0819 19:05:31.514829  452010 main.go:141] libmachine: Running pre-create checks...
	I0819 19:05:31.514838  452010 main.go:141] libmachine: (ha-163902) Calling .PreCreateCheck
	I0819 19:05:31.515251  452010 main.go:141] libmachine: (ha-163902) Calling .GetConfigRaw
	I0819 19:05:31.515658  452010 main.go:141] libmachine: Creating machine...
	I0819 19:05:31.515672  452010 main.go:141] libmachine: (ha-163902) Calling .Create
	I0819 19:05:31.515803  452010 main.go:141] libmachine: (ha-163902) Creating KVM machine...
	I0819 19:05:31.517120  452010 main.go:141] libmachine: (ha-163902) DBG | found existing default KVM network
	I0819 19:05:31.517965  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:31.517812  452034 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201330}
	I0819 19:05:31.517985  452010 main.go:141] libmachine: (ha-163902) DBG | created network xml: 
	I0819 19:05:31.518000  452010 main.go:141] libmachine: (ha-163902) DBG | <network>
	I0819 19:05:31.518008  452010 main.go:141] libmachine: (ha-163902) DBG |   <name>mk-ha-163902</name>
	I0819 19:05:31.518016  452010 main.go:141] libmachine: (ha-163902) DBG |   <dns enable='no'/>
	I0819 19:05:31.518022  452010 main.go:141] libmachine: (ha-163902) DBG |   
	I0819 19:05:31.518031  452010 main.go:141] libmachine: (ha-163902) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 19:05:31.518039  452010 main.go:141] libmachine: (ha-163902) DBG |     <dhcp>
	I0819 19:05:31.518048  452010 main.go:141] libmachine: (ha-163902) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 19:05:31.518060  452010 main.go:141] libmachine: (ha-163902) DBG |     </dhcp>
	I0819 19:05:31.518072  452010 main.go:141] libmachine: (ha-163902) DBG |   </ip>
	I0819 19:05:31.518083  452010 main.go:141] libmachine: (ha-163902) DBG |   
	I0819 19:05:31.518126  452010 main.go:141] libmachine: (ha-163902) DBG | </network>
	I0819 19:05:31.518164  452010 main.go:141] libmachine: (ha-163902) DBG | 
	I0819 19:05:31.524005  452010 main.go:141] libmachine: (ha-163902) DBG | trying to create private KVM network mk-ha-163902 192.168.39.0/24...
	I0819 19:05:31.603853  452010 main.go:141] libmachine: (ha-163902) Setting up store path in /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902 ...
	I0819 19:05:31.603893  452010 main.go:141] libmachine: (ha-163902) DBG | private KVM network mk-ha-163902 192.168.39.0/24 created
	I0819 19:05:31.603907  452010 main.go:141] libmachine: (ha-163902) Building disk image from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 19:05:31.603928  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:31.603779  452034 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:05:31.603949  452010 main.go:141] libmachine: (ha-163902) Downloading /home/jenkins/minikube-integration/19423-430949/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 19:05:31.884760  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:31.884593  452034 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa...
	I0819 19:05:32.045420  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:32.045265  452034 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/ha-163902.rawdisk...
	I0819 19:05:32.045453  452010 main.go:141] libmachine: (ha-163902) DBG | Writing magic tar header
	I0819 19:05:32.045465  452010 main.go:141] libmachine: (ha-163902) DBG | Writing SSH key tar header
	I0819 19:05:32.045473  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:32.045407  452034 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902 ...
	I0819 19:05:32.045639  452010 main.go:141] libmachine: (ha-163902) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902 (perms=drwx------)
	I0819 19:05:32.045668  452010 main.go:141] libmachine: (ha-163902) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902
	I0819 19:05:32.045678  452010 main.go:141] libmachine: (ha-163902) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines (perms=drwxr-xr-x)
	I0819 19:05:32.045701  452010 main.go:141] libmachine: (ha-163902) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube (perms=drwxr-xr-x)
	I0819 19:05:32.045713  452010 main.go:141] libmachine: (ha-163902) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949 (perms=drwxrwxr-x)
	I0819 19:05:32.045726  452010 main.go:141] libmachine: (ha-163902) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 19:05:32.045738  452010 main.go:141] libmachine: (ha-163902) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 19:05:32.045748  452010 main.go:141] libmachine: (ha-163902) Creating domain...
	I0819 19:05:32.045766  452010 main.go:141] libmachine: (ha-163902) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines
	I0819 19:05:32.045780  452010 main.go:141] libmachine: (ha-163902) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:05:32.045793  452010 main.go:141] libmachine: (ha-163902) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949
	I0819 19:05:32.045807  452010 main.go:141] libmachine: (ha-163902) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 19:05:32.045818  452010 main.go:141] libmachine: (ha-163902) DBG | Checking permissions on dir: /home/jenkins
	I0819 19:05:32.045828  452010 main.go:141] libmachine: (ha-163902) DBG | Checking permissions on dir: /home
	I0819 19:05:32.045838  452010 main.go:141] libmachine: (ha-163902) DBG | Skipping /home - not owner
	I0819 19:05:32.047042  452010 main.go:141] libmachine: (ha-163902) define libvirt domain using xml: 
	I0819 19:05:32.047066  452010 main.go:141] libmachine: (ha-163902) <domain type='kvm'>
	I0819 19:05:32.047074  452010 main.go:141] libmachine: (ha-163902)   <name>ha-163902</name>
	I0819 19:05:32.047081  452010 main.go:141] libmachine: (ha-163902)   <memory unit='MiB'>2200</memory>
	I0819 19:05:32.047116  452010 main.go:141] libmachine: (ha-163902)   <vcpu>2</vcpu>
	I0819 19:05:32.047138  452010 main.go:141] libmachine: (ha-163902)   <features>
	I0819 19:05:32.047170  452010 main.go:141] libmachine: (ha-163902)     <acpi/>
	I0819 19:05:32.047193  452010 main.go:141] libmachine: (ha-163902)     <apic/>
	I0819 19:05:32.047207  452010 main.go:141] libmachine: (ha-163902)     <pae/>
	I0819 19:05:32.047217  452010 main.go:141] libmachine: (ha-163902)     
	I0819 19:05:32.047222  452010 main.go:141] libmachine: (ha-163902)   </features>
	I0819 19:05:32.047231  452010 main.go:141] libmachine: (ha-163902)   <cpu mode='host-passthrough'>
	I0819 19:05:32.047235  452010 main.go:141] libmachine: (ha-163902)   
	I0819 19:05:32.047245  452010 main.go:141] libmachine: (ha-163902)   </cpu>
	I0819 19:05:32.047251  452010 main.go:141] libmachine: (ha-163902)   <os>
	I0819 19:05:32.047259  452010 main.go:141] libmachine: (ha-163902)     <type>hvm</type>
	I0819 19:05:32.047272  452010 main.go:141] libmachine: (ha-163902)     <boot dev='cdrom'/>
	I0819 19:05:32.047282  452010 main.go:141] libmachine: (ha-163902)     <boot dev='hd'/>
	I0819 19:05:32.047298  452010 main.go:141] libmachine: (ha-163902)     <bootmenu enable='no'/>
	I0819 19:05:32.047314  452010 main.go:141] libmachine: (ha-163902)   </os>
	I0819 19:05:32.047327  452010 main.go:141] libmachine: (ha-163902)   <devices>
	I0819 19:05:32.047338  452010 main.go:141] libmachine: (ha-163902)     <disk type='file' device='cdrom'>
	I0819 19:05:32.047355  452010 main.go:141] libmachine: (ha-163902)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/boot2docker.iso'/>
	I0819 19:05:32.047365  452010 main.go:141] libmachine: (ha-163902)       <target dev='hdc' bus='scsi'/>
	I0819 19:05:32.047378  452010 main.go:141] libmachine: (ha-163902)       <readonly/>
	I0819 19:05:32.047389  452010 main.go:141] libmachine: (ha-163902)     </disk>
	I0819 19:05:32.047402  452010 main.go:141] libmachine: (ha-163902)     <disk type='file' device='disk'>
	I0819 19:05:32.047414  452010 main.go:141] libmachine: (ha-163902)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 19:05:32.047427  452010 main.go:141] libmachine: (ha-163902)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/ha-163902.rawdisk'/>
	I0819 19:05:32.047437  452010 main.go:141] libmachine: (ha-163902)       <target dev='hda' bus='virtio'/>
	I0819 19:05:32.047445  452010 main.go:141] libmachine: (ha-163902)     </disk>
	I0819 19:05:32.047455  452010 main.go:141] libmachine: (ha-163902)     <interface type='network'>
	I0819 19:05:32.047477  452010 main.go:141] libmachine: (ha-163902)       <source network='mk-ha-163902'/>
	I0819 19:05:32.047489  452010 main.go:141] libmachine: (ha-163902)       <model type='virtio'/>
	I0819 19:05:32.047496  452010 main.go:141] libmachine: (ha-163902)     </interface>
	I0819 19:05:32.047501  452010 main.go:141] libmachine: (ha-163902)     <interface type='network'>
	I0819 19:05:32.047509  452010 main.go:141] libmachine: (ha-163902)       <source network='default'/>
	I0819 19:05:32.047513  452010 main.go:141] libmachine: (ha-163902)       <model type='virtio'/>
	I0819 19:05:32.047521  452010 main.go:141] libmachine: (ha-163902)     </interface>
	I0819 19:05:32.047525  452010 main.go:141] libmachine: (ha-163902)     <serial type='pty'>
	I0819 19:05:32.047530  452010 main.go:141] libmachine: (ha-163902)       <target port='0'/>
	I0819 19:05:32.047537  452010 main.go:141] libmachine: (ha-163902)     </serial>
	I0819 19:05:32.047542  452010 main.go:141] libmachine: (ha-163902)     <console type='pty'>
	I0819 19:05:32.047551  452010 main.go:141] libmachine: (ha-163902)       <target type='serial' port='0'/>
	I0819 19:05:32.047564  452010 main.go:141] libmachine: (ha-163902)     </console>
	I0819 19:05:32.047575  452010 main.go:141] libmachine: (ha-163902)     <rng model='virtio'>
	I0819 19:05:32.047592  452010 main.go:141] libmachine: (ha-163902)       <backend model='random'>/dev/random</backend>
	I0819 19:05:32.047605  452010 main.go:141] libmachine: (ha-163902)     </rng>
	I0819 19:05:32.047613  452010 main.go:141] libmachine: (ha-163902)     
	I0819 19:05:32.047630  452010 main.go:141] libmachine: (ha-163902)     
	I0819 19:05:32.047642  452010 main.go:141] libmachine: (ha-163902)   </devices>
	I0819 19:05:32.047652  452010 main.go:141] libmachine: (ha-163902) </domain>
	I0819 19:05:32.047663  452010 main.go:141] libmachine: (ha-163902) 
	I0819 19:05:32.052511  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:6b:f0:f7 in network default
	I0819 19:05:32.053337  452010 main.go:141] libmachine: (ha-163902) Ensuring networks are active...
	I0819 19:05:32.053362  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:32.054093  452010 main.go:141] libmachine: (ha-163902) Ensuring network default is active
	I0819 19:05:32.054399  452010 main.go:141] libmachine: (ha-163902) Ensuring network mk-ha-163902 is active
	I0819 19:05:32.054895  452010 main.go:141] libmachine: (ha-163902) Getting domain xml...
	I0819 19:05:32.055541  452010 main.go:141] libmachine: (ha-163902) Creating domain...
	I0819 19:05:33.295790  452010 main.go:141] libmachine: (ha-163902) Waiting to get IP...
	I0819 19:05:33.296639  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:33.297086  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:33.297153  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:33.297073  452034 retry.go:31] will retry after 235.373593ms: waiting for machine to come up
	I0819 19:05:33.534776  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:33.535248  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:33.535276  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:33.535203  452034 retry.go:31] will retry after 372.031549ms: waiting for machine to come up
	I0819 19:05:33.908862  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:33.909298  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:33.909329  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:33.909258  452034 retry.go:31] will retry after 461.573677ms: waiting for machine to come up
	I0819 19:05:34.373270  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:34.373854  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:34.373878  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:34.373800  452034 retry.go:31] will retry after 374.272193ms: waiting for machine to come up
	I0819 19:05:34.749561  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:34.750084  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:34.750118  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:34.750021  452034 retry.go:31] will retry after 678.038494ms: waiting for machine to come up
	I0819 19:05:35.429875  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:35.430266  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:35.430297  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:35.430221  452034 retry.go:31] will retry after 797.074334ms: waiting for machine to come up
	I0819 19:05:36.229400  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:36.229868  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:36.229957  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:36.229817  452034 retry.go:31] will retry after 1.092014853s: waiting for machine to come up
	I0819 19:05:37.323998  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:37.324515  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:37.324545  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:37.324450  452034 retry.go:31] will retry after 1.272539267s: waiting for machine to come up
	I0819 19:05:38.599242  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:38.599875  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:38.599904  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:38.599824  452034 retry.go:31] will retry after 1.464855471s: waiting for machine to come up
	I0819 19:05:40.066143  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:40.066660  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:40.066688  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:40.066595  452034 retry.go:31] will retry after 1.829451481s: waiting for machine to come up
	I0819 19:05:41.897944  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:41.898352  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:41.898384  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:41.898295  452034 retry.go:31] will retry after 2.819732082s: waiting for machine to come up
	I0819 19:05:44.719420  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:44.719862  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:44.719886  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:44.719819  452034 retry.go:31] will retry after 2.733084141s: waiting for machine to come up
	I0819 19:05:47.454272  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:47.454861  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:47.454890  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:47.454791  452034 retry.go:31] will retry after 3.235083135s: waiting for machine to come up
	I0819 19:05:50.693380  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:50.693783  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:50.693816  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:50.693726  452034 retry.go:31] will retry after 4.687824547s: waiting for machine to come up
	I0819 19:05:55.385601  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.386026  452010 main.go:141] libmachine: (ha-163902) Found IP for machine: 192.168.39.227
	I0819 19:05:55.386051  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has current primary IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.386058  452010 main.go:141] libmachine: (ha-163902) Reserving static IP address...
	I0819 19:05:55.386385  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find host DHCP lease matching {name: "ha-163902", mac: "52:54:00:57:94:b4", ip: "192.168.39.227"} in network mk-ha-163902
	I0819 19:05:55.470803  452010 main.go:141] libmachine: (ha-163902) DBG | Getting to WaitForSSH function...
	I0819 19:05:55.470832  452010 main.go:141] libmachine: (ha-163902) Reserved static IP address: 192.168.39.227
	I0819 19:05:55.470842  452010 main.go:141] libmachine: (ha-163902) Waiting for SSH to be available...
	I0819 19:05:55.473458  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.473843  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:55.473867  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.474095  452010 main.go:141] libmachine: (ha-163902) DBG | Using SSH client type: external
	I0819 19:05:55.474116  452010 main.go:141] libmachine: (ha-163902) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa (-rw-------)
	I0819 19:05:55.474172  452010 main.go:141] libmachine: (ha-163902) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:05:55.474194  452010 main.go:141] libmachine: (ha-163902) DBG | About to run SSH command:
	I0819 19:05:55.474208  452010 main.go:141] libmachine: (ha-163902) DBG | exit 0
	I0819 19:05:55.597283  452010 main.go:141] libmachine: (ha-163902) DBG | SSH cmd err, output: <nil>: 
	I0819 19:05:55.597559  452010 main.go:141] libmachine: (ha-163902) KVM machine creation complete!
	I0819 19:05:55.597983  452010 main.go:141] libmachine: (ha-163902) Calling .GetConfigRaw
	I0819 19:05:55.598555  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:55.598774  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:55.598943  452010 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 19:05:55.598968  452010 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:05:55.600319  452010 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 19:05:55.600346  452010 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 19:05:55.600352  452010 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 19:05:55.600358  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:55.602646  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.603087  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:55.603119  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.603245  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:55.603449  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.603621  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.603759  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:55.603945  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:05:55.604156  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:05:55.604168  452010 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 19:05:55.704470  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
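
The probes above show how libmachine decides SSH is ready: it runs a bare "exit 0" over SSH and, as the earlier retry.go lines show, retries with a growing delay until the command succeeds. The sketch below is only a rough Go illustration of that wait loop, not minikube's implementation: waitForSSH is a hypothetical helper, it checks plain TCP reachability of port 22 instead of running a real SSH command, and the address is copied from the log.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls the SSH port until it accepts TCP connections or the
	// deadline passes. It is a TCP-level stand-in for the richer
	// "run exit 0 over SSH" probe seen in the log above.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		backoff := time.Second
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			fmt.Printf("ssh not ready (%v), retrying in %v\n", err, backoff)
			time.Sleep(backoff)
			if backoff < 8*time.Second {
				backoff *= 2 // grow the delay, loosely like the retry.go lines above
			}
		}
		return fmt.Errorf("ssh on %s not reachable within %v", addr, timeout)
	}

	func main() {
		if err := waitForSSH("192.168.39.227:22", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
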
	I0819 19:05:55.704499  452010 main.go:141] libmachine: Detecting the provisioner...
	I0819 19:05:55.704510  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:55.707585  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.708003  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:55.708018  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.708210  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:55.708434  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.708619  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.708789  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:55.708997  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:05:55.709234  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:05:55.709250  452010 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 19:05:55.814037  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 19:05:55.814129  452010 main.go:141] libmachine: found compatible host: buildroot
	I0819 19:05:55.814143  452010 main.go:141] libmachine: Provisioning with buildroot...
	I0819 19:05:55.814151  452010 main.go:141] libmachine: (ha-163902) Calling .GetMachineName
	I0819 19:05:55.814414  452010 buildroot.go:166] provisioning hostname "ha-163902"
	I0819 19:05:55.814443  452010 main.go:141] libmachine: (ha-163902) Calling .GetMachineName
	I0819 19:05:55.814730  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:55.817631  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.817991  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:55.818014  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.818198  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:55.818407  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.818543  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.818666  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:55.818844  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:05:55.819030  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:05:55.819042  452010 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-163902 && echo "ha-163902" | sudo tee /etc/hostname
	I0819 19:05:55.936372  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-163902
	
	I0819 19:05:55.936408  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:55.939125  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.939548  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:55.939576  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.939755  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:55.939961  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.940154  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.940278  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:55.940417  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:05:55.940629  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:05:55.940652  452010 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-163902' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-163902/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-163902' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:05:56.049627  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:05:56.049665  452010 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:05:56.049694  452010 buildroot.go:174] setting up certificates
	I0819 19:05:56.049709  452010 provision.go:84] configureAuth start
	I0819 19:05:56.049724  452010 main.go:141] libmachine: (ha-163902) Calling .GetMachineName
	I0819 19:05:56.050048  452010 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:05:56.052735  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.053044  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.053078  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.053336  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:56.055736  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.056089  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.056117  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.056279  452010 provision.go:143] copyHostCerts
	I0819 19:05:56.056312  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:05:56.056346  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:05:56.056364  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:05:56.056431  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:05:56.056563  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:05:56.056581  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:05:56.056586  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:05:56.056612  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:05:56.056656  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:05:56.056675  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:05:56.056678  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:05:56.056698  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:05:56.056741  452010 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.ha-163902 san=[127.0.0.1 192.168.39.227 ha-163902 localhost minikube]
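
The provision step above generates a server certificate whose SANs cover 127.0.0.1, the VM IP 192.168.39.227, the machine name, localhost and minikube. Below is a minimal Go sketch of how such SANs are expressed in an x509 template; it self-signs for brevity instead of signing with the minikube CA as the real provisioner does, and the validity window is only borrowed from the CertExpiration value that appears later in this log.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-163902"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // 3 years, per CertExpiration in this log
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the "san=[...]" list in the log line above.
			DNSNames:    []string{"ha-163902", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.227")},
		}
		// Self-signed here for brevity; minikube signs with its CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
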
	I0819 19:05:56.321863  452010 provision.go:177] copyRemoteCerts
	I0819 19:05:56.321953  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:05:56.321981  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:56.325000  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.325450  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.325486  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.325716  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:56.325967  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.326145  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:56.326344  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:05:56.407242  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 19:05:56.407315  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:05:56.432002  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 19:05:56.432072  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:05:56.455580  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 19:05:56.455641  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0819 19:05:56.479145  452010 provision.go:87] duration metric: took 429.421483ms to configureAuth
	I0819 19:05:56.479183  452010 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:05:56.479390  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:05:56.479512  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:56.482475  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.482826  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.482857  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.483040  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:56.483280  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.483461  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.483617  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:56.483794  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:05:56.483982  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:05:56.484004  452010 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:05:56.736096  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:05:56.736129  452010 main.go:141] libmachine: Checking connection to Docker...
	I0819 19:05:56.736141  452010 main.go:141] libmachine: (ha-163902) Calling .GetURL
	I0819 19:05:56.737376  452010 main.go:141] libmachine: (ha-163902) DBG | Using libvirt version 6000000
	I0819 19:05:56.739791  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.740149  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.740176  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.740410  452010 main.go:141] libmachine: Docker is up and running!
	I0819 19:05:56.740428  452010 main.go:141] libmachine: Reticulating splines...
	I0819 19:05:56.740437  452010 client.go:171] duration metric: took 25.225767843s to LocalClient.Create
	I0819 19:05:56.740467  452010 start.go:167] duration metric: took 25.225834543s to libmachine.API.Create "ha-163902"
	I0819 19:05:56.740480  452010 start.go:293] postStartSetup for "ha-163902" (driver="kvm2")
	I0819 19:05:56.740493  452010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:05:56.740508  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:56.740744  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:05:56.740769  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:56.743112  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.743433  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.743462  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.743701  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:56.743894  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.744047  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:56.744157  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:05:56.823418  452010 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:05:56.827918  452010 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:05:56.827953  452010 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:05:56.828030  452010 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:05:56.828115  452010 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:05:56.828127  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 19:05:56.828222  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:05:56.837576  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:05:56.861282  452010 start.go:296] duration metric: took 120.784925ms for postStartSetup
	I0819 19:05:56.861343  452010 main.go:141] libmachine: (ha-163902) Calling .GetConfigRaw
	I0819 19:05:56.862005  452010 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:05:56.864830  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.865251  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.865277  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.865556  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:05:56.865749  452010 start.go:128] duration metric: took 25.370624874s to createHost
	I0819 19:05:56.865772  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:56.868179  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.868523  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.868556  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.868743  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:56.868961  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.869123  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.869266  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:56.869432  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:05:56.869640  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:05:56.869659  452010 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:05:56.969674  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094356.940579657
	
	I0819 19:05:56.969700  452010 fix.go:216] guest clock: 1724094356.940579657
	I0819 19:05:56.969709  452010 fix.go:229] Guest: 2024-08-19 19:05:56.940579657 +0000 UTC Remote: 2024-08-19 19:05:56.865761238 +0000 UTC m=+25.484677957 (delta=74.818419ms)
	I0819 19:05:56.969731  452010 fix.go:200] guest clock delta is within tolerance: 74.818419ms
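
The fix.go lines above read the guest clock with "date +%s.%N" and accept the skew if the delta to the host clock is small. The following is a tiny, hypothetical Go sketch of that comparison: parseGuestClock is made up for illustration, the sample value is taken from the log, and the 2-second tolerance is an assumption rather than minikube's actual threshold.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "date +%s.%N" output (seconds.nanoseconds,
	// 9-digit fractional part) into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1724094356.940579657") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest) // in real use, compare against the host timestamp taken at probe time
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold, for illustration only
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
	}
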
	I0819 19:05:56.969737  452010 start.go:83] releasing machines lock for "ha-163902", held for 25.474696847s
	I0819 19:05:56.969759  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:56.970089  452010 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:05:56.972736  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.973089  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.973119  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.973315  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:56.973836  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:56.974016  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:56.974079  452010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:05:56.974130  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:56.974231  452010 ssh_runner.go:195] Run: cat /version.json
	I0819 19:05:56.974252  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:56.976706  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.976836  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.977227  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.977255  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.977376  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.977406  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.977410  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:56.977569  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:56.977645  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.977754  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.977900  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:56.977964  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:56.978038  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:05:56.978099  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:05:57.074642  452010 ssh_runner.go:195] Run: systemctl --version
	I0819 19:05:57.080599  452010 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:05:57.240792  452010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:05:57.246582  452010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:05:57.246671  452010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:05:57.262756  452010 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:05:57.262810  452010 start.go:495] detecting cgroup driver to use...
	I0819 19:05:57.262888  452010 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:05:57.279672  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:05:57.294164  452010 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:05:57.294248  452010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:05:57.308353  452010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:05:57.322390  452010 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:05:57.436990  452010 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:05:57.602038  452010 docker.go:233] disabling docker service ...
	I0819 19:05:57.602118  452010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:05:57.616232  452010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:05:57.629871  452010 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:05:57.755386  452010 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:05:57.872386  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:05:57.886357  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:05:57.904738  452010 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:05:57.904798  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:05:57.915183  452010 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:05:57.915262  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:05:57.925467  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:05:57.935604  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:05:57.946343  452010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:05:57.957039  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:05:57.967274  452010 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:05:57.984485  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:05:57.994896  452010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:05:58.004202  452010 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:05:58.004275  452010 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:05:58.016953  452010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:05:58.026601  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:05:58.143400  452010 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:05:58.277062  452010 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:05:58.277167  452010 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:05:58.281828  452010 start.go:563] Will wait 60s for crictl version
	I0819 19:05:58.281896  452010 ssh_runner.go:195] Run: which crictl
	I0819 19:05:58.285555  452010 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:05:58.321545  452010 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
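
The runtime start-up above is gated on two waits: up to 60s for the CRI-O socket path to exist, then up to 60s for crictl to answer. The snippet below sketches only the first, file-existence wait with a deadline; waitForPath is a hypothetical helper and not how minikube implements it.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls until path exists or the deadline passes, mirroring
	// the "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s did not appear within %v", path, timeout)
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}
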
	I0819 19:05:58.321626  452010 ssh_runner.go:195] Run: crio --version
	I0819 19:05:58.348350  452010 ssh_runner.go:195] Run: crio --version
	I0819 19:05:58.378204  452010 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:05:58.379348  452010 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:05:58.381908  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:58.382305  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:58.382329  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:58.382563  452010 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:05:58.386764  452010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:05:58.399156  452010 kubeadm.go:883] updating cluster {Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:05:58.399272  452010 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:05:58.399332  452010 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:05:58.431678  452010 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:05:58.431751  452010 ssh_runner.go:195] Run: which lz4
	I0819 19:05:58.435341  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 19:05:58.435440  452010 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:05:58.439403  452010 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:05:58.439438  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:05:59.728799  452010 crio.go:462] duration metric: took 1.29338158s to copy over tarball
	I0819 19:05:59.728897  452010 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:06:01.890799  452010 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.161871811s)
	I0819 19:06:01.890832  452010 crio.go:469] duration metric: took 2.16199361s to extract the tarball
	I0819 19:06:01.890843  452010 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:06:01.929394  452010 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:06:01.976632  452010 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:06:01.976655  452010 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:06:01.976664  452010 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0 crio true true} ...
	I0819 19:06:01.976785  452010 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-163902 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:06:01.976874  452010 ssh_runner.go:195] Run: crio config
	I0819 19:06:02.031929  452010 cni.go:84] Creating CNI manager for ""
	I0819 19:06:02.031959  452010 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 19:06:02.031971  452010 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:06:02.032002  452010 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-163902 NodeName:ha-163902 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:06:02.032186  452010 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-163902"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
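
The kubeadm config printed above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a quick sanity check that each document carries the expected kind, a sketch like the following decodes the stream document by document; it assumes the gopkg.in/yaml.v3 module is available and truncates the config to its apiVersion/kind headers for brevity.

	package main

	import (
		"errors"
		"fmt"
		"io"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// kubeadmConfig stands in for the multi-document YAML printed in the log
	// above; only the headers are kept here, paste the full dump to try it.
	const kubeadmConfig = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	`

	func main() {
		dec := yaml.NewDecoder(strings.NewReader(kubeadmConfig))
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				panic(err)
			}
			fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
		}
	}
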
	
	I0819 19:06:02.032220  452010 kube-vip.go:115] generating kube-vip config ...
	I0819 19:06:02.032296  452010 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 19:06:02.047887  452010 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 19:06:02.048023  452010 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
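
The kube-vip static pod above advertises 192.168.39.254:8443 as the control-plane virtual IP, with ARP-based leader election and load-balancing enabled. The probe below is only a crude reachability check against that address and port taken from the manifest; it does not validate the API server's TLS certificate or health endpoints.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Dial the control-plane VIP from the kube-vip manifest above and report
	// whether anything accepts TCP connections there.
	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
		if err != nil {
			fmt.Println("VIP not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("VIP reachable")
	}
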
	I0819 19:06:02.048094  452010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:06:02.057959  452010 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:06:02.058049  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 19:06:02.067968  452010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 19:06:02.084960  452010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:06:02.101596  452010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 19:06:02.118401  452010 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0819 19:06:02.135157  452010 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 19:06:02.139038  452010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:06:02.151277  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:06:02.287982  452010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:06:02.305693  452010 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902 for IP: 192.168.39.227
	I0819 19:06:02.305726  452010 certs.go:194] generating shared ca certs ...
	I0819 19:06:02.305746  452010 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:02.305908  452010 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:06:02.305988  452010 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:06:02.306003  452010 certs.go:256] generating profile certs ...
	I0819 19:06:02.306073  452010 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key
	I0819 19:06:02.306104  452010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.crt with IP's: []
	I0819 19:06:02.433694  452010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.crt ...
	I0819 19:06:02.433730  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.crt: {Name:mk8bfdedc79175fd65d664bd895dabaee1f5368d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:02.433947  452010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key ...
	I0819 19:06:02.433970  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key: {Name:mk9c72d09ffba4dd19fb35a4717d614fa3a0d869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:02.434070  452010 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.c22e295a
	I0819 19:06:02.434086  452010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.c22e295a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.254]
	I0819 19:06:02.490434  452010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.c22e295a ...
	I0819 19:06:02.490465  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.c22e295a: {Name:mkb54a1fb887f906a05ab935bff349329bc82beb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:02.490630  452010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.c22e295a ...
	I0819 19:06:02.490651  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.c22e295a: {Name:mka31659a1a74ea6e771829c2dff31e6afb34975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:02.490719  452010 certs.go:381] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.c22e295a -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt
	I0819 19:06:02.490797  452010 certs.go:385] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.c22e295a -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key
	I0819 19:06:02.490850  452010 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key
	I0819 19:06:02.490865  452010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt with IP's: []
	I0819 19:06:02.628360  452010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt ...
	I0819 19:06:02.628394  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt: {Name:mkc9cf581f37f8a743e563825e7e50273a0a4f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:02.628561  452010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key ...
	I0819 19:06:02.628571  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key: {Name:mke69930ded311f6c6a36cae8ec6b8af054e66cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:02.628639  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 19:06:02.628655  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 19:06:02.628666  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 19:06:02.628678  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 19:06:02.628691  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 19:06:02.628701  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 19:06:02.628713  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 19:06:02.628727  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 19:06:02.628774  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:06:02.628812  452010 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:06:02.628823  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:06:02.628846  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:06:02.628869  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:06:02.628891  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:06:02.628928  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:06:02.628952  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 19:06:02.628967  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 19:06:02.628979  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:02.629578  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:06:02.654309  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:06:02.678599  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:06:02.703589  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:06:02.728503  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 19:06:02.753019  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:06:02.777539  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:06:02.802262  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:06:02.829634  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:06:02.856657  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:06:02.883381  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:06:02.907290  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:06:02.924124  452010 ssh_runner.go:195] Run: openssl version
	I0819 19:06:02.929964  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:06:02.940968  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:06:02.945479  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:06:02.945560  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:06:02.951359  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:06:02.962047  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:06:02.973287  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:06:02.977977  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:06:02.978049  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:06:02.983944  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:06:02.995228  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:06:03.006949  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:03.011870  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:03.011955  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:03.017883  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
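
The block above copies each CA bundle into /usr/share/ca-certificates and then symlinks it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, for instance), which is the name OpenSSL-based clients look up when scanning the trust store. A minimal Go sketch of that subject-hash linking step, not minikube's actual implementation; the paths in main are placeholders:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertBySubjectHash computes the OpenSSL subject hash of a CA certificate
// and symlinks it into certsDir as <hash>.0, the lookup name OpenSSL uses
// when it scans /etc/ssl/certs.
func linkCertBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, like the `ln -fs` calls in the log.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Placeholder paths, for illustration only.
	if err := linkCertBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
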
	I0819 19:06:03.029106  452010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:06:03.033337  452010 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 19:06:03.033404  452010 kubeadm.go:392] StartCluster: {Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:06:03.033495  452010 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:06:03.033577  452010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:06:03.088306  452010 cri.go:89] found id: ""
	I0819 19:06:03.088379  452010 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:06:03.100633  452010 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:06:03.114308  452010 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:06:03.124116  452010 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:06:03.124139  452010 kubeadm.go:157] found existing configuration files:
	
	I0819 19:06:03.124185  452010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:06:03.133765  452010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:06:03.133877  452010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:06:03.143684  452010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:06:03.152949  452010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:06:03.153013  452010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:06:03.162852  452010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:06:03.172125  452010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:06:03.172206  452010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:06:03.182018  452010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:06:03.191319  452010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:06:03.191392  452010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:06:03.201176  452010 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:06:03.299118  452010 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 19:06:03.299249  452010 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:06:03.407542  452010 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:06:03.407664  452010 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:06:03.407777  452010 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 19:06:03.422769  452010 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:06:03.483698  452010 out.go:235]   - Generating certificates and keys ...
	I0819 19:06:03.483806  452010 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:06:03.483931  452010 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:06:03.553846  452010 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 19:06:03.736844  452010 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 19:06:03.949345  452010 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 19:06:04.058381  452010 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 19:06:04.276348  452010 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 19:06:04.276498  452010 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-163902 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0819 19:06:04.358230  452010 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 19:06:04.358465  452010 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-163902 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0819 19:06:04.658298  452010 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 19:06:04.771768  452010 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 19:06:05.013848  452010 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 19:06:05.014067  452010 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:06:05.101434  452010 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:06:05.147075  452010 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 19:06:05.335609  452010 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:06:05.577326  452010 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:06:05.782050  452010 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:06:05.782720  452010 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:06:05.786237  452010 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:06:05.857880  452010 out.go:235]   - Booting up control plane ...
	I0819 19:06:05.858076  452010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:06:05.858171  452010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:06:05.858294  452010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:06:05.858447  452010 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:06:05.858596  452010 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:06:05.858664  452010 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:06:05.959353  452010 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 19:06:05.959485  452010 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 19:06:06.964584  452010 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005758603s
	I0819 19:06:06.964714  452010 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 19:06:12.643972  452010 kubeadm.go:310] [api-check] The API server is healthy after 5.680423764s
	I0819 19:06:12.655198  452010 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 19:06:12.674235  452010 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 19:06:12.706731  452010 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 19:06:12.706972  452010 kubeadm.go:310] [mark-control-plane] Marking the node ha-163902 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 19:06:12.719625  452010 kubeadm.go:310] [bootstrap-token] Using token: ydvj8p.1o1g0g4n7744ocvt
	I0819 19:06:12.720986  452010 out.go:235]   - Configuring RBAC rules ...
	I0819 19:06:12.721175  452010 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 19:06:12.728254  452010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 19:06:12.737165  452010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 19:06:12.748696  452010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 19:06:12.753193  452010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 19:06:12.757577  452010 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 19:06:13.049055  452010 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 19:06:13.490560  452010 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 19:06:14.050301  452010 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 19:06:14.052795  452010 kubeadm.go:310] 
	I0819 19:06:14.052880  452010 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 19:06:14.052897  452010 kubeadm.go:310] 
	I0819 19:06:14.052996  452010 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 19:06:14.053006  452010 kubeadm.go:310] 
	I0819 19:06:14.053039  452010 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 19:06:14.053161  452010 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 19:06:14.053238  452010 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 19:06:14.053248  452010 kubeadm.go:310] 
	I0819 19:06:14.053326  452010 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 19:06:14.053338  452010 kubeadm.go:310] 
	I0819 19:06:14.053393  452010 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 19:06:14.053407  452010 kubeadm.go:310] 
	I0819 19:06:14.053480  452010 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 19:06:14.053583  452010 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 19:06:14.053646  452010 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 19:06:14.053652  452010 kubeadm.go:310] 
	I0819 19:06:14.053730  452010 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 19:06:14.053804  452010 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 19:06:14.053810  452010 kubeadm.go:310] 
	I0819 19:06:14.053883  452010 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ydvj8p.1o1g0g4n7744ocvt \
	I0819 19:06:14.053968  452010 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f \
	I0819 19:06:14.053990  452010 kubeadm.go:310] 	--control-plane 
	I0819 19:06:14.053996  452010 kubeadm.go:310] 
	I0819 19:06:14.054071  452010 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 19:06:14.054077  452010 kubeadm.go:310] 
	I0819 19:06:14.054147  452010 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ydvj8p.1o1g0g4n7744ocvt \
	I0819 19:06:14.054243  452010 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f 
	I0819 19:06:14.055816  452010 kubeadm.go:310] W0819 19:06:03.271368     848 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:06:14.056128  452010 kubeadm.go:310] W0819 19:06:03.272125     848 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:06:14.056225  452010 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:06:14.056250  452010 cni.go:84] Creating CNI manager for ""
	I0819 19:06:14.056259  452010 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 19:06:14.057946  452010 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 19:06:14.059253  452010 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 19:06:14.064807  452010 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 19:06:14.064830  452010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 19:06:14.086444  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 19:06:14.506238  452010 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:06:14.506388  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:14.506381  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-163902 minikube.k8s.io/updated_at=2024_08_19T19_06_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=ha-163902 minikube.k8s.io/primary=true
	I0819 19:06:14.727144  452010 ops.go:34] apiserver oom_adj: -16
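
The oom_adj check above (`cat /proc/$(pgrep kube-apiserver)/oom_adj`) reports -16, meaning the apiserver has been given a lower OOM-killer priority than ordinary processes. A small Go sketch of the same check, assuming pgrep is on PATH; this is an illustration, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj finds the kube-apiserver PID via pgrep and reads its
// /proc/<pid>/oom_adj value, mirroring the shell pipeline in the log.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver not running: %w", err)
	}
	pid := strings.Fields(string(out))[0] // first matching PID
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver oom_adj:", adj) // -16 in the run above
}
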
	I0819 19:06:14.727339  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:15.228325  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:15.727767  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:16.228014  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:16.727822  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:17.227601  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:17.727580  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:17.850730  452010 kubeadm.go:1113] duration metric: took 3.344418538s to wait for elevateKubeSystemPrivileges
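
The repeated `kubectl get sa default` invocations above are a poll: the step retries about twice a second until the default ServiceAccount exists, and only then records the elevateKubeSystemPrivileges duration. A rough Go sketch of such a wait loop; the kubeconfig path, interval and timeout are illustrative assumptions rather than the real values:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls `kubectl get sa default` until it
// succeeds or the timeout expires, similar to the ~500ms cadence in the log.
func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default ServiceAccount is now present
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// Placeholder kubeconfig path; adjust for the environment at hand.
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
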
	I0819 19:06:17.850766  452010 kubeadm.go:394] duration metric: took 14.817365401s to StartCluster
	I0819 19:06:17.850791  452010 settings.go:142] acquiring lock: {Name:mkc71def6be9966e5f008a22161bf3ed26f482b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:17.850881  452010 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:06:17.851520  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:17.851775  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 19:06:17.851803  452010 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:06:17.851867  452010 addons.go:69] Setting storage-provisioner=true in profile "ha-163902"
	I0819 19:06:17.851769  452010 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:06:17.851899  452010 addons.go:234] Setting addon storage-provisioner=true in "ha-163902"
	I0819 19:06:17.851910  452010 start.go:241] waiting for startup goroutines ...
	I0819 19:06:17.851921  452010 addons.go:69] Setting default-storageclass=true in profile "ha-163902"
	I0819 19:06:17.851928  452010 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:06:17.851957  452010 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-163902"
	I0819 19:06:17.851957  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:06:17.852317  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:17.852347  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:17.852660  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:17.852801  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:17.868850  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33535
	I0819 19:06:17.869366  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:17.869959  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:17.869987  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:17.870339  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:17.870826  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:17.870851  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:17.874949  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44305
	I0819 19:06:17.875430  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:17.875975  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:17.876002  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:17.876402  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:17.876641  452010 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:06:17.879064  452010 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:06:17.879313  452010 kapi.go:59] client config for ha-163902: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key", CAFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 19:06:17.879823  452010 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 19:06:17.880007  452010 addons.go:234] Setting addon default-storageclass=true in "ha-163902"
	I0819 19:06:17.880044  452010 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:06:17.880336  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:17.880377  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:17.888108  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38721
	I0819 19:06:17.888682  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:17.889289  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:17.889315  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:17.889785  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:17.890009  452010 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:06:17.892185  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:06:17.894187  452010 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:06:17.895505  452010 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:06:17.895532  452010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:06:17.895558  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:06:17.898400  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I0819 19:06:17.898852  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:17.899075  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:17.899443  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:17.899459  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:17.899525  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:06:17.899543  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:17.899765  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:06:17.899881  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:17.899946  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:06:17.900078  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:06:17.900196  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:06:17.900478  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:17.900525  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:17.916366  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42309
	I0819 19:06:17.916867  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:17.917444  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:17.917473  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:17.917848  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:17.918070  452010 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:06:17.919771  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:06:17.920045  452010 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:06:17.920066  452010 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:06:17.920087  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:06:17.923139  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:17.923619  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:06:17.923650  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:17.923833  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:06:17.924044  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:06:17.924212  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:06:17.924373  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:06:18.071538  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 19:06:18.076040  452010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:06:18.089047  452010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:06:18.600380  452010 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
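
The sed pipeline a few lines earlier rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway 192.168.39.1. Reconstructed from that sed expression (not read back from the cluster), the injected Corefile fragment looks roughly like:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
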
	I0819 19:06:18.842364  452010 main.go:141] libmachine: Making call to close driver server
	I0819 19:06:18.842392  452010 main.go:141] libmachine: (ha-163902) Calling .Close
	I0819 19:06:18.842435  452010 main.go:141] libmachine: Making call to close driver server
	I0819 19:06:18.842454  452010 main.go:141] libmachine: (ha-163902) Calling .Close
	I0819 19:06:18.842721  452010 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:06:18.842741  452010 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:06:18.842758  452010 main.go:141] libmachine: Making call to close driver server
	I0819 19:06:18.842768  452010 main.go:141] libmachine: (ha-163902) Calling .Close
	I0819 19:06:18.842825  452010 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:06:18.842844  452010 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:06:18.842853  452010 main.go:141] libmachine: Making call to close driver server
	I0819 19:06:18.842861  452010 main.go:141] libmachine: (ha-163902) Calling .Close
	I0819 19:06:18.842829  452010 main.go:141] libmachine: (ha-163902) DBG | Closing plugin on server side
	I0819 19:06:18.842992  452010 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:06:18.843008  452010 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:06:18.843066  452010 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 19:06:18.843083  452010 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 19:06:18.843176  452010 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 19:06:18.843183  452010 round_trippers.go:469] Request Headers:
	I0819 19:06:18.843194  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:06:18.843199  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:06:18.843453  452010 main.go:141] libmachine: (ha-163902) DBG | Closing plugin on server side
	I0819 19:06:18.843492  452010 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:06:18.843507  452010 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:06:18.855500  452010 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0819 19:06:18.856225  452010 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 19:06:18.856244  452010 round_trippers.go:469] Request Headers:
	I0819 19:06:18.856255  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:06:18.856264  452010 round_trippers.go:473]     Content-Type: application/json
	I0819 19:06:18.856268  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:06:18.859394  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
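
The GET/PUT pair above is the default-storageclass addon reading the "standard" StorageClass and writing it back. The log does not show the request body; a typical way to do the same read-modify-write with client-go is sketched below, where setting the is-default-class annotation is an assumption about what the update carries and the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig; the run above talks to https://192.168.39.254:8443.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// GET .../storage.k8s.io/v1/storageclasses/standard, as in the round_trippers lines.
	sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Mark it as the cluster default and PUT it back (assumed intent of the update).
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("standard StorageClass updated")
}
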
	I0819 19:06:18.859620  452010 main.go:141] libmachine: Making call to close driver server
	I0819 19:06:18.859638  452010 main.go:141] libmachine: (ha-163902) Calling .Close
	I0819 19:06:18.859971  452010 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:06:18.859990  452010 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:06:18.862354  452010 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 19:06:18.863459  452010 addons.go:510] duration metric: took 1.011661335s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 19:06:18.863502  452010 start.go:246] waiting for cluster config update ...
	I0819 19:06:18.863519  452010 start.go:255] writing updated cluster config ...
	I0819 19:06:18.865072  452010 out.go:201] 
	I0819 19:06:18.866489  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:06:18.866562  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:06:18.868097  452010 out.go:177] * Starting "ha-163902-m02" control-plane node in "ha-163902" cluster
	I0819 19:06:18.869172  452010 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:06:18.869199  452010 cache.go:56] Caching tarball of preloaded images
	I0819 19:06:18.869292  452010 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:06:18.869303  452010 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:06:18.869365  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:06:18.869546  452010 start.go:360] acquireMachinesLock for ha-163902-m02: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:06:18.869588  452010 start.go:364] duration metric: took 22.151µs to acquireMachinesLock for "ha-163902-m02"
	I0819 19:06:18.869607  452010 start.go:93] Provisioning new machine with config: &{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:06:18.869680  452010 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0819 19:06:18.871112  452010 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 19:06:18.871199  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:18.871224  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:18.886168  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42023
	I0819 19:06:18.886614  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:18.887144  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:18.887167  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:18.887473  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:18.887703  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetMachineName
	I0819 19:06:18.887860  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:18.888028  452010 start.go:159] libmachine.API.Create for "ha-163902" (driver="kvm2")
	I0819 19:06:18.888052  452010 client.go:168] LocalClient.Create starting
	I0819 19:06:18.888094  452010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem
	I0819 19:06:18.888140  452010 main.go:141] libmachine: Decoding PEM data...
	I0819 19:06:18.888162  452010 main.go:141] libmachine: Parsing certificate...
	I0819 19:06:18.888231  452010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem
	I0819 19:06:18.888260  452010 main.go:141] libmachine: Decoding PEM data...
	I0819 19:06:18.888276  452010 main.go:141] libmachine: Parsing certificate...
	I0819 19:06:18.888302  452010 main.go:141] libmachine: Running pre-create checks...
	I0819 19:06:18.888313  452010 main.go:141] libmachine: (ha-163902-m02) Calling .PreCreateCheck
	I0819 19:06:18.888470  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetConfigRaw
	I0819 19:06:18.888848  452010 main.go:141] libmachine: Creating machine...
	I0819 19:06:18.888862  452010 main.go:141] libmachine: (ha-163902-m02) Calling .Create
	I0819 19:06:18.889007  452010 main.go:141] libmachine: (ha-163902-m02) Creating KVM machine...
	I0819 19:06:18.890208  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found existing default KVM network
	I0819 19:06:18.890322  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found existing private KVM network mk-ha-163902
	I0819 19:06:18.890448  452010 main.go:141] libmachine: (ha-163902-m02) Setting up store path in /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02 ...
	I0819 19:06:18.890478  452010 main.go:141] libmachine: (ha-163902-m02) Building disk image from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 19:06:18.890498  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:18.890425  452374 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:06:18.890610  452010 main.go:141] libmachine: (ha-163902-m02) Downloading /home/jenkins/minikube-integration/19423-430949/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 19:06:19.144676  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:19.144527  452374 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa...
	I0819 19:06:19.231508  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:19.231334  452374 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/ha-163902-m02.rawdisk...
	I0819 19:06:19.231542  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Writing magic tar header
	I0819 19:06:19.231553  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Writing SSH key tar header
	I0819 19:06:19.231563  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:19.231455  452374 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02 ...
	I0819 19:06:19.231578  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02
	I0819 19:06:19.231617  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines
	I0819 19:06:19.231630  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:06:19.231648  452010 main.go:141] libmachine: (ha-163902-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02 (perms=drwx------)
	I0819 19:06:19.231661  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949
	I0819 19:06:19.231675  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 19:06:19.231687  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Checking permissions on dir: /home/jenkins
	I0819 19:06:19.231697  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Checking permissions on dir: /home
	I0819 19:06:19.231717  452010 main.go:141] libmachine: (ha-163902-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines (perms=drwxr-xr-x)
	I0819 19:06:19.231731  452010 main.go:141] libmachine: (ha-163902-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube (perms=drwxr-xr-x)
	I0819 19:06:19.231742  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Skipping /home - not owner
	I0819 19:06:19.231762  452010 main.go:141] libmachine: (ha-163902-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949 (perms=drwxrwxr-x)
	I0819 19:06:19.231779  452010 main.go:141] libmachine: (ha-163902-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 19:06:19.231798  452010 main.go:141] libmachine: (ha-163902-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 19:06:19.231809  452010 main.go:141] libmachine: (ha-163902-m02) Creating domain...
	I0819 19:06:19.232755  452010 main.go:141] libmachine: (ha-163902-m02) define libvirt domain using xml: 
	I0819 19:06:19.232780  452010 main.go:141] libmachine: (ha-163902-m02) <domain type='kvm'>
	I0819 19:06:19.232790  452010 main.go:141] libmachine: (ha-163902-m02)   <name>ha-163902-m02</name>
	I0819 19:06:19.232801  452010 main.go:141] libmachine: (ha-163902-m02)   <memory unit='MiB'>2200</memory>
	I0819 19:06:19.232809  452010 main.go:141] libmachine: (ha-163902-m02)   <vcpu>2</vcpu>
	I0819 19:06:19.232815  452010 main.go:141] libmachine: (ha-163902-m02)   <features>
	I0819 19:06:19.232824  452010 main.go:141] libmachine: (ha-163902-m02)     <acpi/>
	I0819 19:06:19.232831  452010 main.go:141] libmachine: (ha-163902-m02)     <apic/>
	I0819 19:06:19.232839  452010 main.go:141] libmachine: (ha-163902-m02)     <pae/>
	I0819 19:06:19.232864  452010 main.go:141] libmachine: (ha-163902-m02)     
	I0819 19:06:19.232904  452010 main.go:141] libmachine: (ha-163902-m02)   </features>
	I0819 19:06:19.232933  452010 main.go:141] libmachine: (ha-163902-m02)   <cpu mode='host-passthrough'>
	I0819 19:06:19.232961  452010 main.go:141] libmachine: (ha-163902-m02)   
	I0819 19:06:19.232980  452010 main.go:141] libmachine: (ha-163902-m02)   </cpu>
	I0819 19:06:19.232997  452010 main.go:141] libmachine: (ha-163902-m02)   <os>
	I0819 19:06:19.233014  452010 main.go:141] libmachine: (ha-163902-m02)     <type>hvm</type>
	I0819 19:06:19.233027  452010 main.go:141] libmachine: (ha-163902-m02)     <boot dev='cdrom'/>
	I0819 19:06:19.233037  452010 main.go:141] libmachine: (ha-163902-m02)     <boot dev='hd'/>
	I0819 19:06:19.233049  452010 main.go:141] libmachine: (ha-163902-m02)     <bootmenu enable='no'/>
	I0819 19:06:19.233058  452010 main.go:141] libmachine: (ha-163902-m02)   </os>
	I0819 19:06:19.233065  452010 main.go:141] libmachine: (ha-163902-m02)   <devices>
	I0819 19:06:19.233074  452010 main.go:141] libmachine: (ha-163902-m02)     <disk type='file' device='cdrom'>
	I0819 19:06:19.233092  452010 main.go:141] libmachine: (ha-163902-m02)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/boot2docker.iso'/>
	I0819 19:06:19.233104  452010 main.go:141] libmachine: (ha-163902-m02)       <target dev='hdc' bus='scsi'/>
	I0819 19:06:19.233112  452010 main.go:141] libmachine: (ha-163902-m02)       <readonly/>
	I0819 19:06:19.233124  452010 main.go:141] libmachine: (ha-163902-m02)     </disk>
	I0819 19:06:19.233153  452010 main.go:141] libmachine: (ha-163902-m02)     <disk type='file' device='disk'>
	I0819 19:06:19.233167  452010 main.go:141] libmachine: (ha-163902-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 19:06:19.233184  452010 main.go:141] libmachine: (ha-163902-m02)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/ha-163902-m02.rawdisk'/>
	I0819 19:06:19.233195  452010 main.go:141] libmachine: (ha-163902-m02)       <target dev='hda' bus='virtio'/>
	I0819 19:06:19.233206  452010 main.go:141] libmachine: (ha-163902-m02)     </disk>
	I0819 19:06:19.233215  452010 main.go:141] libmachine: (ha-163902-m02)     <interface type='network'>
	I0819 19:06:19.233228  452010 main.go:141] libmachine: (ha-163902-m02)       <source network='mk-ha-163902'/>
	I0819 19:06:19.233243  452010 main.go:141] libmachine: (ha-163902-m02)       <model type='virtio'/>
	I0819 19:06:19.233255  452010 main.go:141] libmachine: (ha-163902-m02)     </interface>
	I0819 19:06:19.233266  452010 main.go:141] libmachine: (ha-163902-m02)     <interface type='network'>
	I0819 19:06:19.233279  452010 main.go:141] libmachine: (ha-163902-m02)       <source network='default'/>
	I0819 19:06:19.233295  452010 main.go:141] libmachine: (ha-163902-m02)       <model type='virtio'/>
	I0819 19:06:19.233306  452010 main.go:141] libmachine: (ha-163902-m02)     </interface>
	I0819 19:06:19.233313  452010 main.go:141] libmachine: (ha-163902-m02)     <serial type='pty'>
	I0819 19:06:19.233326  452010 main.go:141] libmachine: (ha-163902-m02)       <target port='0'/>
	I0819 19:06:19.233337  452010 main.go:141] libmachine: (ha-163902-m02)     </serial>
	I0819 19:06:19.233348  452010 main.go:141] libmachine: (ha-163902-m02)     <console type='pty'>
	I0819 19:06:19.233364  452010 main.go:141] libmachine: (ha-163902-m02)       <target type='serial' port='0'/>
	I0819 19:06:19.233376  452010 main.go:141] libmachine: (ha-163902-m02)     </console>
	I0819 19:06:19.233387  452010 main.go:141] libmachine: (ha-163902-m02)     <rng model='virtio'>
	I0819 19:06:19.233403  452010 main.go:141] libmachine: (ha-163902-m02)       <backend model='random'>/dev/random</backend>
	I0819 19:06:19.233413  452010 main.go:141] libmachine: (ha-163902-m02)     </rng>
	I0819 19:06:19.233430  452010 main.go:141] libmachine: (ha-163902-m02)     
	I0819 19:06:19.233451  452010 main.go:141] libmachine: (ha-163902-m02)     
	I0819 19:06:19.233463  452010 main.go:141] libmachine: (ha-163902-m02)   </devices>
	I0819 19:06:19.233470  452010 main.go:141] libmachine: (ha-163902-m02) </domain>
	I0819 19:06:19.233482  452010 main.go:141] libmachine: (ha-163902-m02) 
	I0819 19:06:19.240231  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:9f:ed:ae in network default
	I0819 19:06:19.240868  452010 main.go:141] libmachine: (ha-163902-m02) Ensuring networks are active...
	I0819 19:06:19.240891  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:19.241837  452010 main.go:141] libmachine: (ha-163902-m02) Ensuring network default is active
	I0819 19:06:19.242186  452010 main.go:141] libmachine: (ha-163902-m02) Ensuring network mk-ha-163902 is active
	I0819 19:06:19.242500  452010 main.go:141] libmachine: (ha-163902-m02) Getting domain xml...
	I0819 19:06:19.243337  452010 main.go:141] libmachine: (ha-163902-m02) Creating domain...
	I0819 19:06:20.479495  452010 main.go:141] libmachine: (ha-163902-m02) Waiting to get IP...
	I0819 19:06:20.480261  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:20.480701  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:20.480744  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:20.480681  452374 retry.go:31] will retry after 209.264831ms: waiting for machine to come up
	I0819 19:06:20.691235  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:20.691678  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:20.691713  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:20.691621  452374 retry.go:31] will retry after 241.772157ms: waiting for machine to come up
	I0819 19:06:20.935152  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:20.935570  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:20.935591  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:20.935531  452374 retry.go:31] will retry after 360.106793ms: waiting for machine to come up
	I0819 19:06:21.297067  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:21.297619  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:21.297645  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:21.297574  452374 retry.go:31] will retry after 403.561399ms: waiting for machine to come up
	I0819 19:06:21.703174  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:21.703612  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:21.703644  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:21.703562  452374 retry.go:31] will retry after 752.964877ms: waiting for machine to come up
	I0819 19:06:22.458803  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:22.459336  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:22.459367  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:22.459273  452374 retry.go:31] will retry after 637.744367ms: waiting for machine to come up
	I0819 19:06:23.099345  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:23.099815  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:23.099840  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:23.099710  452374 retry.go:31] will retry after 1.154976518s: waiting for machine to come up
	I0819 19:06:24.256860  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:24.257443  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:24.257476  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:24.257385  452374 retry.go:31] will retry after 1.031712046s: waiting for machine to come up
	I0819 19:06:25.290650  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:25.291159  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:25.291188  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:25.291098  452374 retry.go:31] will retry after 1.272784033s: waiting for machine to come up
	I0819 19:06:26.565596  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:26.566129  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:26.566157  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:26.566062  452374 retry.go:31] will retry after 1.65255646s: waiting for machine to come up
	I0819 19:06:28.220964  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:28.221448  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:28.221498  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:28.221428  452374 retry.go:31] will retry after 2.031618852s: waiting for machine to come up
	I0819 19:06:30.254961  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:30.255400  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:30.255434  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:30.255356  452374 retry.go:31] will retry after 3.580532641s: waiting for machine to come up
	I0819 19:06:33.838198  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:33.838578  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:33.838619  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:33.838545  452374 retry.go:31] will retry after 3.563790311s: waiting for machine to come up
	I0819 19:06:37.404569  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:37.405172  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:37.405205  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:37.405082  452374 retry.go:31] will retry after 5.402566654s: waiting for machine to come up
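The repeated "will retry after ...: waiting for machine to come up" lines come from a poll-with-backoff loop that keeps asking for the domain's DHCP lease until it appears. A standalone sketch of that pattern (lookupLeaseIP is a hypothetical stand-in for the libvirt lease query, and the delays only roughly mirror the log):

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupLeaseIP stands in for asking libvirt for the network's DHCP leases.
func lookupLeaseIP(mac string) (string, error) {
	return "", errNoLease
}

func waitForIP(mac string, maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // grow the interval, roughly 200ms up to a few seconds
		}
	}
	return "", fmt.Errorf("machine %s never acquired an IP within %v", mac, maxWait)
}

func main() {
	if _, err := waitForIP("52:54:00:92:f5:c9", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}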
	I0819 19:06:42.810280  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:42.810723  452010 main.go:141] libmachine: (ha-163902-m02) Found IP for machine: 192.168.39.162
	I0819 19:06:42.810745  452010 main.go:141] libmachine: (ha-163902-m02) Reserving static IP address...
	I0819 19:06:42.810771  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has current primary IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:42.811159  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find host DHCP lease matching {name: "ha-163902-m02", mac: "52:54:00:92:f5:c9", ip: "192.168.39.162"} in network mk-ha-163902
	I0819 19:06:42.895009  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Getting to WaitForSSH function...
	I0819 19:06:42.895034  452010 main.go:141] libmachine: (ha-163902-m02) Reserved static IP address: 192.168.39.162
	I0819 19:06:42.895047  452010 main.go:141] libmachine: (ha-163902-m02) Waiting for SSH to be available...
	I0819 19:06:42.897729  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:42.898129  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902
	I0819 19:06:42.898158  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find defined IP address of network mk-ha-163902 interface with MAC address 52:54:00:92:f5:c9
	I0819 19:06:42.898365  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Using SSH client type: external
	I0819 19:06:42.898392  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa (-rw-------)
	I0819 19:06:42.898430  452010 main.go:141] libmachine: (ha-163902-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:06:42.898444  452010 main.go:141] libmachine: (ha-163902-m02) DBG | About to run SSH command:
	I0819 19:06:42.898459  452010 main.go:141] libmachine: (ha-163902-m02) DBG | exit 0
	I0819 19:06:42.902075  452010 main.go:141] libmachine: (ha-163902-m02) DBG | SSH cmd err, output: exit status 255: 
	I0819 19:06:42.902099  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 19:06:42.902110  452010 main.go:141] libmachine: (ha-163902-m02) DBG | command : exit 0
	I0819 19:06:42.902117  452010 main.go:141] libmachine: (ha-163902-m02) DBG | err     : exit status 255
	I0819 19:06:42.902128  452010 main.go:141] libmachine: (ha-163902-m02) DBG | output  : 
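The WaitForSSH probe above runs `exit 0` through the system ssh binary with the listed options; a non-zero exit (255 here, because sshd is not up yet) makes the driver wait and retry, as seen a few seconds later. A minimal standalone sketch of that readiness check (user, host and key path are examples, not the paths from this run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true when `ssh ... exit 0` against the guest succeeds.
func sshReady(user, host, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	for i := 0; i < 20; i++ {
		if sshReady("docker", "192.168.39.162", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		fmt.Println("SSH not ready, retrying in 3s...")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}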
	I0819 19:06:45.903578  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Getting to WaitForSSH function...
	I0819 19:06:45.906100  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:45.906543  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:45.906582  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:45.906718  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Using SSH client type: external
	I0819 19:06:45.906742  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa (-rw-------)
	I0819 19:06:45.906763  452010 main.go:141] libmachine: (ha-163902-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:06:45.906775  452010 main.go:141] libmachine: (ha-163902-m02) DBG | About to run SSH command:
	I0819 19:06:45.906790  452010 main.go:141] libmachine: (ha-163902-m02) DBG | exit 0
	I0819 19:06:46.029270  452010 main.go:141] libmachine: (ha-163902-m02) DBG | SSH cmd err, output: <nil>: 
	I0819 19:06:46.029593  452010 main.go:141] libmachine: (ha-163902-m02) KVM machine creation complete!
	I0819 19:06:46.029973  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetConfigRaw
	I0819 19:06:46.030719  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:46.030971  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:46.031146  452010 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 19:06:46.031164  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetState
	I0819 19:06:46.032447  452010 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 19:06:46.032467  452010 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 19:06:46.032478  452010 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 19:06:46.032487  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:46.034805  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.035732  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.035761  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.036454  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:46.036713  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.036919  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.037113  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:46.037330  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:06:46.037572  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0819 19:06:46.037584  452010 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 19:06:46.140509  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:06:46.140546  452010 main.go:141] libmachine: Detecting the provisioner...
	I0819 19:06:46.140558  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:46.143288  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.143565  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.143592  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.143717  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:46.143925  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.144103  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.144235  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:46.144447  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:06:46.144671  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0819 19:06:46.144687  452010 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 19:06:46.249738  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 19:06:46.249836  452010 main.go:141] libmachine: found compatible host: buildroot
	I0819 19:06:46.249850  452010 main.go:141] libmachine: Provisioning with buildroot...
	I0819 19:06:46.249865  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetMachineName
	I0819 19:06:46.250222  452010 buildroot.go:166] provisioning hostname "ha-163902-m02"
	I0819 19:06:46.250255  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetMachineName
	I0819 19:06:46.250482  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:46.253090  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.253476  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.253505  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.253681  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:46.253889  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.254078  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.254216  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:46.254408  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:06:46.254582  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0819 19:06:46.254595  452010 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-163902-m02 && echo "ha-163902-m02" | sudo tee /etc/hostname
	I0819 19:06:46.371136  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-163902-m02
	
	I0819 19:06:46.371175  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:46.374542  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.374922  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.374971  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.375211  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:46.375472  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.375704  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.375854  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:46.376074  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:06:46.376314  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0819 19:06:46.376340  452010 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-163902-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-163902-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-163902-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:06:46.490910  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:06:46.490950  452010 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:06:46.490969  452010 buildroot.go:174] setting up certificates
	I0819 19:06:46.490981  452010 provision.go:84] configureAuth start
	I0819 19:06:46.490991  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetMachineName
	I0819 19:06:46.491351  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:06:46.494171  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.494505  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.494534  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.494726  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:46.497255  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.497624  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.497657  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.497763  452010 provision.go:143] copyHostCerts
	I0819 19:06:46.497804  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:06:46.497855  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:06:46.497868  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:06:46.497941  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:06:46.498019  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:06:46.498037  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:06:46.498041  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:06:46.498065  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:06:46.498114  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:06:46.498131  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:06:46.498137  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:06:46.498158  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:06:46.498205  452010 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.ha-163902-m02 san=[127.0.0.1 192.168.39.162 ha-163902-m02 localhost minikube]
	I0819 19:06:46.688166  452010 provision.go:177] copyRemoteCerts
	I0819 19:06:46.688231  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:06:46.688256  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:46.690890  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.691349  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.691376  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.691618  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:46.691848  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.692029  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:46.692134  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	I0819 19:06:46.775113  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 19:06:46.775201  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 19:06:46.800562  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 19:06:46.800648  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 19:06:46.825181  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 19:06:46.825261  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:06:46.850206  452010 provision.go:87] duration metric: took 359.20931ms to configureAuth
	I0819 19:06:46.850246  452010 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:06:46.850434  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:06:46.850526  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:46.853294  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.853661  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.853696  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.853864  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:46.854072  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.854237  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.854388  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:46.854571  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:06:46.854803  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0819 19:06:46.854824  452010 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:06:47.115715  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:06:47.115742  452010 main.go:141] libmachine: Checking connection to Docker...
	I0819 19:06:47.115753  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetURL
	I0819 19:06:47.117088  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Using libvirt version 6000000
	I0819 19:06:47.119409  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.119731  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:47.119762  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.119948  452010 main.go:141] libmachine: Docker is up and running!
	I0819 19:06:47.119963  452010 main.go:141] libmachine: Reticulating splines...
	I0819 19:06:47.119970  452010 client.go:171] duration metric: took 28.231904734s to LocalClient.Create
	I0819 19:06:47.120005  452010 start.go:167] duration metric: took 28.231975893s to libmachine.API.Create "ha-163902"
	I0819 19:06:47.120017  452010 start.go:293] postStartSetup for "ha-163902-m02" (driver="kvm2")
	I0819 19:06:47.120028  452010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:06:47.120046  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:47.120329  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:06:47.120356  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:47.122945  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.123340  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:47.123368  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.123533  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:47.123760  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:47.123954  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:47.124115  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	I0819 19:06:47.207593  452010 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:06:47.212130  452010 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:06:47.212169  452010 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:06:47.212264  452010 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:06:47.212346  452010 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:06:47.212359  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 19:06:47.212449  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:06:47.222079  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:06:47.247184  452010 start.go:296] duration metric: took 127.149883ms for postStartSetup
	I0819 19:06:47.247249  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetConfigRaw
	I0819 19:06:47.247998  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:06:47.250571  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.250897  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:47.250929  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.251160  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:06:47.251388  452010 start.go:128] duration metric: took 28.381695209s to createHost
	I0819 19:06:47.251418  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:47.253641  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.254012  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:47.254050  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.254245  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:47.254461  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:47.254626  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:47.254783  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:47.254985  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:06:47.255157  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0819 19:06:47.255167  452010 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:06:47.361918  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094407.343319532
	
	I0819 19:06:47.361947  452010 fix.go:216] guest clock: 1724094407.343319532
	I0819 19:06:47.361954  452010 fix.go:229] Guest: 2024-08-19 19:06:47.343319532 +0000 UTC Remote: 2024-08-19 19:06:47.251402615 +0000 UTC m=+75.870319340 (delta=91.916917ms)
	I0819 19:06:47.361971  452010 fix.go:200] guest clock delta is within tolerance: 91.916917ms
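The fix.go lines above compare the guest's `date +%s.%N` output with the host clock and accept the machine when the absolute delta is within a tolerance (about 92ms in this run). A small sketch of that comparison, with the sample timestamp from the log hard-coded and a one-second tolerance assumed for illustration:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time
// (float parsing is close enough for a skew check).
func parseGuestClock(out string) (time.Time, error) {
	f, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1724094407.343319532") // sample value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Now().Sub(guest)
	tolerance := time.Second // illustrative tolerance
	fmt.Printf("guest clock delta: %v\n", delta)
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Println("guest clock delta is within tolerance")
	} else {
		fmt.Println("guest clock needs adjustment")
	}
}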
	I0819 19:06:47.361977  452010 start.go:83] releasing machines lock for "ha-163902-m02", held for 28.492379147s
	I0819 19:06:47.362002  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:47.362323  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:06:47.364733  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.365073  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:47.365103  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.367658  452010 out.go:177] * Found network options:
	I0819 19:06:47.369187  452010 out.go:177]   - NO_PROXY=192.168.39.227
	W0819 19:06:47.370455  452010 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 19:06:47.370493  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:47.371252  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:47.371472  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:47.371583  452010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:06:47.371625  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	W0819 19:06:47.371642  452010 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 19:06:47.371725  452010 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:06:47.371748  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:47.374201  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.374392  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.374576  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:47.374602  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.374737  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:47.374743  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:47.374766  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.374915  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:47.374972  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:47.375146  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:47.375174  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:47.375347  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:47.375362  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	I0819 19:06:47.375504  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	I0819 19:06:47.607502  452010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:06:47.613186  452010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:06:47.613275  452010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:06:47.628915  452010 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:06:47.628941  452010 start.go:495] detecting cgroup driver to use...
	I0819 19:06:47.629004  452010 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:06:47.647161  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:06:47.661236  452010 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:06:47.661311  452010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:06:47.675255  452010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:06:47.690214  452010 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:06:47.802867  452010 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:06:47.950361  452010 docker.go:233] disabling docker service ...
	I0819 19:06:47.950457  452010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:06:47.964737  452010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:06:47.977713  452010 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:06:48.122143  452010 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:06:48.256398  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:06:48.270509  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:06:48.289209  452010 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:06:48.289278  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:06:48.300162  452010 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:06:48.300241  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:06:48.311567  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:06:48.322706  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:06:48.334109  452010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:06:48.345265  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:06:48.355818  452010 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:06:48.373250  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
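Taken together, the sed commands above leave the relevant keys of /etc/crio/crio.conf.d/02-crio.conf roughly as below. This is a reconstruction from the logged commands, not a capture of the file; other keys in the drop-in (and the exact section layout) are untouched and may differ.

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"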
	I0819 19:06:48.384349  452010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:06:48.394369  452010 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:06:48.394450  452010 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:06:48.408323  452010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:06:48.418407  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:06:48.548481  452010 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:06:48.690311  452010 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:06:48.690400  452010 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:06:48.695504  452010 start.go:563] Will wait 60s for crictl version
	I0819 19:06:48.695586  452010 ssh_runner.go:195] Run: which crictl
	I0819 19:06:48.699351  452010 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:06:48.736307  452010 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:06:48.736409  452010 ssh_runner.go:195] Run: crio --version
	I0819 19:06:48.763687  452010 ssh_runner.go:195] Run: crio --version
	I0819 19:06:48.793843  452010 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:06:48.795348  452010 out.go:177]   - env NO_PROXY=192.168.39.227
	I0819 19:06:48.796862  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:06:48.799508  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:48.799972  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:48.800004  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:48.800231  452010 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:06:48.804407  452010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:06:48.816996  452010 mustload.go:65] Loading cluster: ha-163902
	I0819 19:06:48.817328  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:06:48.817633  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:48.817667  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:48.832766  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34887
	I0819 19:06:48.833271  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:48.833821  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:48.833842  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:48.834204  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:48.834427  452010 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:06:48.836015  452010 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:06:48.836403  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:48.836438  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:48.852134  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41377
	I0819 19:06:48.852620  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:48.853069  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:48.853086  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:48.853453  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:48.853685  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:06:48.853871  452010 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902 for IP: 192.168.39.162
	I0819 19:06:48.853883  452010 certs.go:194] generating shared ca certs ...
	I0819 19:06:48.853915  452010 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:48.854117  452010 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:06:48.854176  452010 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:06:48.854190  452010 certs.go:256] generating profile certs ...
	I0819 19:06:48.854287  452010 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key
	I0819 19:06:48.854321  452010 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.4731d0f4
	I0819 19:06:48.854347  452010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.4731d0f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.162 192.168.39.254]
	I0819 19:06:48.963236  452010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.4731d0f4 ...
	I0819 19:06:48.963267  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.4731d0f4: {Name:mkc59270b5f28bfe677695dfd975da72759a5572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:48.963460  452010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.4731d0f4 ...
	I0819 19:06:48.963476  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.4731d0f4: {Name:mkd7343b2ea6812d10a2f5d6ca9281b67dd3ee9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:48.963569  452010 certs.go:381] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.4731d0f4 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt
	I0819 19:06:48.963742  452010 certs.go:385] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.4731d0f4 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key
	I0819 19:06:48.963922  452010 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key
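The "generating signed profile cert ... with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.162 192.168.39.254]" step above produces an API-server serving certificate whose IP SANs cover the in-cluster service IP, localhost, both control-plane node IPs and the HA VIP. A condensed sketch of that kind of certificate using crypto/x509; subjects, key sizes and lifetimes here are illustrative, not minikube's exact values:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Self-signed CA standing in for the profile's minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Serving cert carrying the IP SANs listed in the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.227"), net.ParseIP("192.168.39.162"), net.ParseIP("192.168.39.254"),
		},
	}
	_, err = x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	log.Println("serving cert signed; SANs include both node IPs and the VIP 192.168.39.254")
}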
	I0819 19:06:48.963942  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 19:06:48.963962  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 19:06:48.963981  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 19:06:48.963999  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 19:06:48.964019  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 19:06:48.964033  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 19:06:48.964051  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 19:06:48.964068  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 19:06:48.964139  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:06:48.964180  452010 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:06:48.964194  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:06:48.964234  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:06:48.964266  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:06:48.964294  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:06:48.964347  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:06:48.964387  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 19:06:48.964410  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 19:06:48.964427  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:48.964486  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:06:48.967352  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:48.967855  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:06:48.967878  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:48.968056  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:06:48.968287  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:06:48.968440  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:06:48.968605  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
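The sshutil.go:53 line above is where minikube opens an SSH connection to the new node with the machine's private key and the docker user. A minimal stand-alone sketch of the same connection with golang.org/x/crypto/ssh, reusing the IP, key path and user shown in the log (an illustration, not minikube's actual sshutil code):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address are copied from the log above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs; verify host keys elsewhere
	}
	client, err := ssh.Dial("tcp", "192.168.39.227:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The same kind of existence check the ssh_runner.go:195 lines below show.
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`stat -c "%s %y" /var/lib/minikube/certs/sa.pub`)
	fmt.Printf("%s (err=%v)\n", out, err)
}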
	I0819 19:06:49.041602  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 19:06:49.046052  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 19:06:49.062113  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 19:06:49.066289  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 19:06:49.077555  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 19:06:49.081754  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 19:06:49.092067  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 19:06:49.096287  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 19:06:49.106703  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 19:06:49.110861  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 19:06:49.122396  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 19:06:49.126491  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 19:06:49.137756  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:06:49.162262  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:06:49.186698  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:06:49.211042  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:06:49.235218  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 19:06:49.259107  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:06:49.283065  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:06:49.306565  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:06:49.331781  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:06:49.356156  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:06:49.381191  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:06:49.406139  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 19:06:49.423202  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 19:06:49.440609  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 19:06:49.457592  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 19:06:49.474446  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 19:06:49.492390  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 19:06:49.508948  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
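Everything the new control plane needs before kubeadm runs is staged in the block above: the cluster CA pair, the proxy-client CA, the apiserver and proxy-client certs, the service-account keys, the front-proxy and etcd CAs, and the kubeconfig. A rough sketch of the same staging with plain scp (paths and endpoint copied from the log; it skips the sudo handling that minikube's ssh_runner performs):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Illustrative subset of the source -> destination pairs copied in the log above.
	key := "/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa"
	certs := map[string]string{
		"/home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt":                           "/var/lib/minikube/certs/ca.crt",
		"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt": "/var/lib/minikube/certs/apiserver.crt",
	}
	for src, dst := range certs {
		cmd := exec.Command("scp", "-i", key, src, "docker@192.168.39.227:"+dst)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("copy %s: %v\n%s", src, err, out)
		}
	}
}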
	I0819 19:06:49.525867  452010 ssh_runner.go:195] Run: openssl version
	I0819 19:06:49.531624  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:06:49.542714  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:06:49.547377  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:06:49.547459  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:06:49.553205  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:06:49.564319  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:06:49.575725  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:06:49.580233  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:06:49.580294  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:06:49.586702  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:06:49.598456  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:06:49.610315  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:49.614956  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:49.615051  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:49.620779  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
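Each CA placed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients on the node find a trusted CA. A minimal sketch of that convention, assuming openssl is available on PATH:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash replicates `ln -fs <pem> /etc/ssl/certs/<hash>.0` from the log above.
func linkBySubjectHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // -f semantics: replace an existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}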
	I0819 19:06:49.631576  452010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:06:49.636283  452010 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 19:06:49.636350  452010 kubeadm.go:934] updating node {m02 192.168.39.162 8443 v1.31.0 crio true true} ...
	I0819 19:06:49.636461  452010 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-163902-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:06:49.636488  452010 kube-vip.go:115] generating kube-vip config ...
	I0819 19:06:49.636529  452010 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 19:06:49.654189  452010 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 19:06:49.654277  452010 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
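The manifest above runs kube-vip as a static pod: it ARP-advertises the control-plane VIP 192.168.39.254 on eth0, takes a leader-election lease (plndr-cp-lock), and, because lb_enable/lb_port are set, load-balances API traffic on port 8443 across control-plane members. A quick reachability probe for that VIP, written as a sketch rather than anything minikube itself runs:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// VIP and port come from the kube-vip environment above (address / lb_port).
	addr := net.JoinHostPort("192.168.39.254", "8443")
	conn, err := tls.DialWithDialer(&net.Dialer{Timeout: 3 * time.Second}, "tcp", addr,
		&tls.Config{InsecureSkipVerify: true}) // probing reachability only, not authenticating
	if err != nil {
		log.Fatalf("VIP not answering: %v", err)
	}
	defer conn.Close()
	fmt.Println("kube-vip VIP reachable at", addr,
		"- serving cert CN:", conn.ConnectionState().PeerCertificates[0].Subject.CommonName)
}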
	I0819 19:06:49.654357  452010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:06:49.666661  452010 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 19:06:49.666736  452010 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 19:06:49.679260  452010 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 19:06:49.679297  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 19:06:49.679372  452010 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 19:06:49.679396  452010 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0819 19:06:49.679378  452010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 19:06:49.684043  452010 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 19:06:49.684086  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 19:06:50.618723  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 19:06:50.618830  452010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 19:06:50.623803  452010 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 19:06:50.623852  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 19:06:50.680835  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:06:50.716285  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 19:06:50.716393  452010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 19:06:50.724629  452010 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 19:06:50.724676  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
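The kubectl, kubeadm and kubelet downloads above use URLs of the form ...?checksum=file:<url>.sha256, meaning each release binary is verified against its published SHA-256 before being cached and copied to the node. A stand-alone sketch of that verification (hypothetical helper name, not minikube's download.go):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into dst and returns the SHA-256 hex digest of what was written.
func fetch(url, dst string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
	got, err := fetch(base, "/tmp/kubectl")
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if got != strings.TrimSpace(string(want)) {
		log.Fatalf("checksum mismatch: got %s want %s", got, want)
	}
	fmt.Println("kubectl verified:", got)
}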
	I0819 19:06:51.161810  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 19:06:51.171528  452010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 19:06:51.188306  452010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:06:51.205300  452010 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 19:06:51.221816  452010 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 19:06:51.225777  452010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
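The one-liner above upserts the control-plane.minikube.internal entry: any existing line for the name is stripped, the new VIP mapping is appended, and the temp file is copied back over /etc/hosts. The same idea in Go, as a simplified sketch (the bash version in the log is what actually runs):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const name = "control-plane.minikube.internal"
	const entry = "192.168.39.254\t" + name

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := make([]string, 0, len(lines)+1)
	for _, line := range lines {
		if strings.Contains(line, name) {
			continue // drop any stale mapping (the log's grep -v does this more precisely)
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// Writing /etc/hosts needs root; minikube does the equivalent via sudo cp on the node.
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}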
	I0819 19:06:51.238152  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:06:51.357742  452010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:06:51.375585  452010 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:06:51.375948  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:51.375996  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:51.391834  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43635
	I0819 19:06:51.392385  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:51.392946  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:51.392977  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:51.393420  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:51.393636  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:06:51.393843  452010 start.go:317] joinCluster: &{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:06:51.393973  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 19:06:51.393989  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:06:51.397091  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:51.397629  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:06:51.397667  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:51.397820  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:06:51.398038  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:06:51.398241  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:06:51.398386  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:06:51.536333  452010 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:06:51.536402  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token am366n.it73me2b53s38qnq --discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-163902-m02 --control-plane --apiserver-advertise-address=192.168.39.162 --apiserver-bind-port=8443"
	I0819 19:07:13.043284  452010 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token am366n.it73me2b53s38qnq --discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-163902-m02 --control-plane --apiserver-advertise-address=192.168.39.162 --apiserver-bind-port=8443": (21.506847863s)
	I0819 19:07:13.043332  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 19:07:13.565976  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-163902-m02 minikube.k8s.io/updated_at=2024_08_19T19_07_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=ha-163902 minikube.k8s.io/primary=false
	I0819 19:07:13.664942  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-163902-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 19:07:13.776384  452010 start.go:319] duration metric: took 22.382536414s to joinCluster
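Once kubeadm join returns, minikube restarts the kubelet, labels the new node with its minikube metadata, and removes the control-plane NoSchedule taint so m02 can also schedule workloads. With client-go, the label step looks roughly like the following sketch (kubeconfig path and patch contents are illustrative, not minikube's implementation):

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Subset of the labels the log applies via kubectl label --overwrite.
	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"ha-163902","minikube.k8s.io/primary":"false"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), "ha-163902-m02",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		log.Fatal(err)
	}
}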
	I0819 19:07:13.776496  452010 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:07:13.776816  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:07:13.778036  452010 out.go:177] * Verifying Kubernetes components...
	I0819 19:07:13.779490  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:07:14.029940  452010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:07:14.075916  452010 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:07:14.076250  452010 kapi.go:59] client config for ha-163902: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key", CAFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 19:07:14.076336  452010 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.227:8443
	I0819 19:07:14.076646  452010 node_ready.go:35] waiting up to 6m0s for node "ha-163902-m02" to be "Ready" ...
	I0819 19:07:14.076762  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:14.076774  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:14.076786  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:14.076793  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:14.096746  452010 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0819 19:07:14.577779  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:14.577815  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:14.577828  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:14.577834  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:14.586954  452010 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 19:07:15.076937  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:15.076970  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:15.076982  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:15.076988  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:15.083444  452010 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 19:07:15.577281  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:15.577306  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:15.577314  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:15.577319  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:15.580559  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:16.077342  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:16.077367  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:16.077376  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:16.077380  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:16.080563  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:16.081250  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:16.577752  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:16.577776  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:16.577784  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:16.577790  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:16.581579  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:17.077503  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:17.077529  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:17.077538  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:17.077542  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:17.080905  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:17.577841  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:17.577883  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:17.577892  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:17.577896  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:17.581363  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:18.077999  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:18.078032  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:18.078042  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:18.078047  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:18.081372  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:18.082174  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:18.577568  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:18.577592  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:18.577601  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:18.577604  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:18.580896  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:19.077445  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:19.077473  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:19.077482  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:19.077487  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:19.080726  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:19.577746  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:19.577772  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:19.577781  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:19.577785  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:19.582182  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:07:20.076987  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:20.077012  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:20.077019  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:20.077024  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:20.122989  452010 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0819 19:07:20.123408  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:20.577872  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:20.577899  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:20.577910  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:20.577915  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:20.581604  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:21.077636  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:21.077661  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:21.077669  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:21.077674  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:21.082054  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:07:21.577535  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:21.577560  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:21.577569  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:21.577574  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:21.581033  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:22.076938  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:22.076966  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:22.076974  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:22.076978  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:22.080744  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:22.577705  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:22.577730  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:22.577738  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:22.577743  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:22.581201  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:22.581773  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:23.077034  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:23.077061  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:23.077070  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:23.077076  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:23.079992  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:23.577915  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:23.577944  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:23.577957  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:23.577964  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:23.581249  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:24.077818  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:24.077849  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:24.077860  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:24.077868  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:24.081464  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:24.577323  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:24.577349  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:24.577358  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:24.577362  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:24.580849  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:25.076926  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:25.076959  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:25.076971  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:25.076977  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:25.080348  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:25.080966  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:25.577361  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:25.577386  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:25.577395  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:25.577400  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:25.580864  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:26.077798  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:26.077834  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:26.077845  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:26.077849  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:26.080955  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:26.577189  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:26.577217  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:26.577226  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:26.577231  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:26.580600  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:27.077563  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:27.077586  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:27.077595  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:27.077600  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:27.081458  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:27.082293  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:27.577810  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:27.577835  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:27.577844  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:27.577847  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:27.581936  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:07:28.077195  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:28.077225  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:28.077238  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:28.077243  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:28.080523  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:28.577630  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:28.577656  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:28.577665  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:28.577669  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:28.580980  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:29.077582  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:29.077612  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:29.077620  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:29.077624  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:29.080971  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:29.576863  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:29.576892  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:29.576901  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:29.576905  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:29.580440  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:29.580954  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:30.077368  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:30.077396  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:30.077404  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:30.077409  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:30.080700  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:30.577721  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:30.577747  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:30.577756  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:30.577760  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:30.581399  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:31.077219  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:31.077245  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:31.077253  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:31.077258  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:31.080712  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:31.577280  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:31.577308  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:31.577319  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:31.577325  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:31.580843  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:31.581534  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:32.076972  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:32.077004  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.077015  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.077025  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.080781  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:32.081404  452010 node_ready.go:49] node "ha-163902-m02" has status "Ready":"True"
	I0819 19:07:32.081432  452010 node_ready.go:38] duration metric: took 18.004765365s for node "ha-163902-m02" to be "Ready" ...
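The long run of GETs above is minikube polling /api/v1/nodes/ha-163902-m02 roughly every 500ms until the node's Ready condition turns True, which took about 18s here. Expressed with client-go, the same wait looks roughly like this sketch (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-163902-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors, as the log does
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("node ha-163902-m02 is Ready")
}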
	I0819 19:07:32.081445  452010 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:07:32.081544  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:07:32.081558  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.081568  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.081574  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.086806  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:07:32.093034  452010 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nkths" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.093169  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-nkths
	I0819 19:07:32.093180  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.093187  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.093191  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.096615  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:32.097343  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:32.097359  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.097367  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.097370  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.100137  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:32.100755  452010 pod_ready.go:93] pod "coredns-6f6b679f8f-nkths" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:32.100775  452010 pod_ready.go:82] duration metric: took 7.709242ms for pod "coredns-6f6b679f8f-nkths" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.100785  452010 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wmp8k" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.100846  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-wmp8k
	I0819 19:07:32.100854  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.100861  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.100866  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.103592  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:32.104256  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:32.104275  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.104282  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.104286  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.106843  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:32.107384  452010 pod_ready.go:93] pod "coredns-6f6b679f8f-wmp8k" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:32.107408  452010 pod_ready.go:82] duration metric: took 6.616047ms for pod "coredns-6f6b679f8f-wmp8k" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.107421  452010 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.107492  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-163902
	I0819 19:07:32.107502  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.107510  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.107517  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.110212  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:32.111449  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:32.111468  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.111479  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.111486  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.113751  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:32.114260  452010 pod_ready.go:93] pod "etcd-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:32.114280  452010 pod_ready.go:82] duration metric: took 6.851673ms for pod "etcd-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.114289  452010 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.114397  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-163902-m02
	I0819 19:07:32.114409  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.114416  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.114420  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.117190  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:32.117911  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:32.117929  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.117940  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.117946  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.120664  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:32.121157  452010 pod_ready.go:93] pod "etcd-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:32.121179  452010 pod_ready.go:82] duration metric: took 6.88181ms for pod "etcd-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.121198  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.277612  452010 request.go:632] Waited for 156.338168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902
	I0819 19:07:32.277674  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902
	I0819 19:07:32.277680  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.277688  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.277692  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.280988  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:32.478066  452010 request.go:632] Waited for 196.429632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:32.478153  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:32.478161  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.478176  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.478187  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.481514  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:32.482098  452010 pod_ready.go:93] pod "kube-apiserver-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:32.482121  452010 pod_ready.go:82] duration metric: took 360.912121ms for pod "kube-apiserver-ha-163902" in "kube-system" namespace to be "Ready" ...
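The "Waited for ... due to client-side throttling" lines are produced by client-go's local rate limiter, not by API priority and fairness: the rest.Config dump earlier shows QPS:0, Burst:0, so the defaults of 5 requests/s with a burst of 10 apply, and the rapid pod-plus-node GETs here exceed that budget and are delayed on the client. The knobs live on rest.Config; a small sketch (kubeconfig path is a placeholder):

package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	// 0 means "use the defaults" (5 QPS, burst 10); raising these avoids the
	// client-side waits reported in the log, at the cost of more API server load.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatal(err)
	}
}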
	I0819 19:07:32.482132  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.677074  452010 request.go:632] Waited for 194.863482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902-m02
	I0819 19:07:32.677193  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902-m02
	I0819 19:07:32.677205  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.677216  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.677226  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.680997  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:32.877001  452010 request.go:632] Waited for 195.332988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:32.877090  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:32.877095  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.877103  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.877107  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.880190  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:32.880934  452010 pod_ready.go:93] pod "kube-apiserver-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:32.880962  452010 pod_ready.go:82] duration metric: took 398.822495ms for pod "kube-apiserver-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.880976  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:33.077931  452010 request.go:632] Waited for 196.851229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902
	I0819 19:07:33.077997  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902
	I0819 19:07:33.078002  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:33.078019  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:33.078025  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:33.082066  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:07:33.277433  452010 request.go:632] Waited for 194.507863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:33.277499  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:33.277505  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:33.277515  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:33.277521  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:33.281107  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:33.281792  452010 pod_ready.go:93] pod "kube-controller-manager-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:33.281813  452010 pod_ready.go:82] duration metric: took 400.829541ms for pod "kube-controller-manager-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:33.281824  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:33.477892  452010 request.go:632] Waited for 195.967397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902-m02
	I0819 19:07:33.477965  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902-m02
	I0819 19:07:33.477973  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:33.477984  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:33.477991  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:33.482174  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:07:33.677478  452010 request.go:632] Waited for 194.399986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:33.677576  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:33.677585  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:33.677598  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:33.677606  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:33.680890  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:33.681698  452010 pod_ready.go:93] pod "kube-controller-manager-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:33.681724  452010 pod_ready.go:82] duration metric: took 399.894379ms for pod "kube-controller-manager-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:33.681735  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4whvs" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:33.877888  452010 request.go:632] Waited for 196.072309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4whvs
	I0819 19:07:33.877968  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4whvs
	I0819 19:07:33.877974  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:33.877982  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:33.877986  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:33.881723  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:34.077843  452010 request.go:632] Waited for 195.46411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:34.077917  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:34.077923  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:34.077933  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:34.077945  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:34.081664  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:34.082198  452010 pod_ready.go:93] pod "kube-proxy-4whvs" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:34.082219  452010 pod_ready.go:82] duration metric: took 400.478539ms for pod "kube-proxy-4whvs" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:34.082229  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wxrsv" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:34.277406  452010 request.go:632] Waited for 195.097969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxrsv
	I0819 19:07:34.277471  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxrsv
	I0819 19:07:34.277476  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:34.277484  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:34.277488  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:34.280924  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:34.478028  452010 request.go:632] Waited for 196.395749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:34.478117  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:34.478128  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:34.478141  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:34.478147  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:34.481981  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:34.482567  452010 pod_ready.go:93] pod "kube-proxy-wxrsv" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:34.482589  452010 pod_ready.go:82] duration metric: took 400.353478ms for pod "kube-proxy-wxrsv" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:34.482598  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:34.677654  452010 request.go:632] Waited for 194.973644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902
	I0819 19:07:34.677743  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902
	I0819 19:07:34.677766  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:34.677795  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:34.677806  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:34.681127  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:34.877054  452010 request.go:632] Waited for 195.298137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:34.877122  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:34.877127  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:34.877150  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:34.877154  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:34.880635  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:34.881396  452010 pod_ready.go:93] pod "kube-scheduler-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:34.881421  452010 pod_ready.go:82] duration metric: took 398.815157ms for pod "kube-scheduler-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:34.881434  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:35.077465  452010 request.go:632] Waited for 195.950631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902-m02
	I0819 19:07:35.077565  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902-m02
	I0819 19:07:35.077575  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:35.077583  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:35.077587  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:35.081189  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:35.277171  452010 request.go:632] Waited for 195.326146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:35.277249  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:35.277254  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:35.277262  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:35.277268  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:35.280625  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:35.281104  452010 pod_ready.go:93] pod "kube-scheduler-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:35.281151  452010 pod_ready.go:82] duration metric: took 399.707427ms for pod "kube-scheduler-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:35.281171  452010 pod_ready.go:39] duration metric: took 3.199703609s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
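
The block above is minikube's pod_ready poll: each system pod is fetched, its Ready condition is checked, and the paired node GET trips client-go's client-side throttling (hence the ~195ms waits). Below is a minimal, hypothetical sketch of such a readiness poll with client-go; the kubeconfig path, namespace, pod name, and the `waitPodReady` helper are placeholders, not minikube's actual pod_ready.go code.

```go
// readiness_poll.go - hypothetical sketch of polling a pod's Ready condition,
// in the spirit of the pod_ready.go waits logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready, as in the "Ready":"True" lines above
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // client-go adds its own client-side throttling on top
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Placeholder kubeconfig path; the test harness talks to https://192.168.39.227:8443 directly.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-163902", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```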
	I0819 19:07:35.281196  452010 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:07:35.281258  452010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:07:35.297202  452010 api_server.go:72] duration metric: took 21.520658024s to wait for apiserver process to appear ...
	I0819 19:07:35.297237  452010 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:07:35.297264  452010 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0819 19:07:35.302264  452010 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0819 19:07:35.302357  452010 round_trippers.go:463] GET https://192.168.39.227:8443/version
	I0819 19:07:35.302365  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:35.302373  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:35.302377  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:35.303359  452010 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 19:07:35.303527  452010 api_server.go:141] control plane version: v1.31.0
	I0819 19:07:35.303549  452010 api_server.go:131] duration metric: took 6.303973ms to wait for apiserver health ...
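
Once the pods are Ready, the log shows a direct /healthz probe followed by a /version GET against the API server. A bare-bones sketch of the same two checks with net/http is below; the endpoint URL is copied from the log, and TLS verification is skipped purely for illustration (the real client trusts the cluster CA).

```go
// healthz_check.go - hypothetical sketch of the healthz + version probes logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify is only for this sketch; minikube uses the cluster CA instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.39.227:8443/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect "200 ok" as in the log

	resp, err = client.Get("https://192.168.39.227:8443/version")
	if err != nil {
		panic(err)
	}
	version, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("version: %s\n", version) // contains "v1.31.0" per the control plane version line
}
```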
	I0819 19:07:35.303559  452010 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:07:35.477939  452010 request.go:632] Waited for 174.284855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:07:35.478039  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:07:35.478057  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:35.478068  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:35.478081  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:35.487457  452010 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 19:07:35.491884  452010 system_pods.go:59] 17 kube-system pods found
	I0819 19:07:35.491929  452010 system_pods.go:61] "coredns-6f6b679f8f-nkths" [b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2] Running
	I0819 19:07:35.491937  452010 system_pods.go:61] "coredns-6f6b679f8f-wmp8k" [ca2ee9c4-992a-4251-a717-9843b7b41894] Running
	I0819 19:07:35.491943  452010 system_pods.go:61] "etcd-ha-163902" [fc3d22fd-d8b3-45ca-a6f6-9898285f85d4] Running
	I0819 19:07:35.491949  452010 system_pods.go:61] "etcd-ha-163902-m02" [f05c7c5e-de7b-4963-b897-204a95440e0d] Running
	I0819 19:07:35.491954  452010 system_pods.go:61] "kindnet-97cnn" [75284cfb-20b5-4675-b53e-db4130cc6722] Running
	I0819 19:07:35.491958  452010 system_pods.go:61] "kindnet-bpwjl" [624275c2-a670-4cc0-a11c-70f3e1b78946] Running
	I0819 19:07:35.491963  452010 system_pods.go:61] "kube-apiserver-ha-163902" [ab456c18-e3cf-462b-8028-55ddf9b36306] Running
	I0819 19:07:35.491968  452010 system_pods.go:61] "kube-apiserver-ha-163902-m02" [f11ec84f-77aa-4003-a786-f6c54b4c14bd] Running
	I0819 19:07:35.491974  452010 system_pods.go:61] "kube-controller-manager-ha-163902" [efeafbc7-7719-4632-9215-fd3c3ca09cc5] Running
	I0819 19:07:35.491980  452010 system_pods.go:61] "kube-controller-manager-ha-163902-m02" [2d80759a-8137-4e3c-a621-ba92532c9d9b] Running
	I0819 19:07:35.491985  452010 system_pods.go:61] "kube-proxy-4whvs" [b241f2e5-d83c-432b-bdce-c26940efa096] Running
	I0819 19:07:35.491990  452010 system_pods.go:61] "kube-proxy-wxrsv" [3d78c5e8-eed2-4da5-9425-76f96e2d8ed6] Running
	I0819 19:07:35.491998  452010 system_pods.go:61] "kube-scheduler-ha-163902" [ee9d8a96-56d9-4bc2-9829-bdf4a0ec0749] Running
	I0819 19:07:35.492004  452010 system_pods.go:61] "kube-scheduler-ha-163902-m02" [90a559e2-5c2a-4db8-b9d4-21362177d686] Running
	I0819 19:07:35.492010  452010 system_pods.go:61] "kube-vip-ha-163902" [3143571f-0eb1-4f9a-ada5-27fda4724d18] Running
	I0819 19:07:35.492014  452010 system_pods.go:61] "kube-vip-ha-163902-m02" [ccd47f0d-1f18-4146-808f-5436d25c859f] Running
	I0819 19:07:35.492019  452010 system_pods.go:61] "storage-provisioner" [05dffa5a-3372-4a79-94ad-33d14a4b7fd0] Running
	I0819 19:07:35.492029  452010 system_pods.go:74] duration metric: took 188.461842ms to wait for pod list to return data ...
	I0819 19:07:35.492044  452010 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:07:35.677485  452010 request.go:632] Waited for 185.337326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0819 19:07:35.677576  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0819 19:07:35.677584  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:35.677594  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:35.677601  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:35.682572  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:07:35.682838  452010 default_sa.go:45] found service account: "default"
	I0819 19:07:35.682856  452010 default_sa.go:55] duration metric: took 190.80577ms for default service account to be created ...
	I0819 19:07:35.682867  452010 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:07:35.877325  452010 request.go:632] Waited for 194.369278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:07:35.877408  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:07:35.877416  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:35.877428  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:35.877434  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:35.882295  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:07:35.887802  452010 system_pods.go:86] 17 kube-system pods found
	I0819 19:07:35.887838  452010 system_pods.go:89] "coredns-6f6b679f8f-nkths" [b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2] Running
	I0819 19:07:35.887844  452010 system_pods.go:89] "coredns-6f6b679f8f-wmp8k" [ca2ee9c4-992a-4251-a717-9843b7b41894] Running
	I0819 19:07:35.887849  452010 system_pods.go:89] "etcd-ha-163902" [fc3d22fd-d8b3-45ca-a6f6-9898285f85d4] Running
	I0819 19:07:35.887853  452010 system_pods.go:89] "etcd-ha-163902-m02" [f05c7c5e-de7b-4963-b897-204a95440e0d] Running
	I0819 19:07:35.887856  452010 system_pods.go:89] "kindnet-97cnn" [75284cfb-20b5-4675-b53e-db4130cc6722] Running
	I0819 19:07:35.887860  452010 system_pods.go:89] "kindnet-bpwjl" [624275c2-a670-4cc0-a11c-70f3e1b78946] Running
	I0819 19:07:35.887863  452010 system_pods.go:89] "kube-apiserver-ha-163902" [ab456c18-e3cf-462b-8028-55ddf9b36306] Running
	I0819 19:07:35.887867  452010 system_pods.go:89] "kube-apiserver-ha-163902-m02" [f11ec84f-77aa-4003-a786-f6c54b4c14bd] Running
	I0819 19:07:35.887870  452010 system_pods.go:89] "kube-controller-manager-ha-163902" [efeafbc7-7719-4632-9215-fd3c3ca09cc5] Running
	I0819 19:07:35.887874  452010 system_pods.go:89] "kube-controller-manager-ha-163902-m02" [2d80759a-8137-4e3c-a621-ba92532c9d9b] Running
	I0819 19:07:35.887877  452010 system_pods.go:89] "kube-proxy-4whvs" [b241f2e5-d83c-432b-bdce-c26940efa096] Running
	I0819 19:07:35.887880  452010 system_pods.go:89] "kube-proxy-wxrsv" [3d78c5e8-eed2-4da5-9425-76f96e2d8ed6] Running
	I0819 19:07:35.887883  452010 system_pods.go:89] "kube-scheduler-ha-163902" [ee9d8a96-56d9-4bc2-9829-bdf4a0ec0749] Running
	I0819 19:07:35.887889  452010 system_pods.go:89] "kube-scheduler-ha-163902-m02" [90a559e2-5c2a-4db8-b9d4-21362177d686] Running
	I0819 19:07:35.887894  452010 system_pods.go:89] "kube-vip-ha-163902" [3143571f-0eb1-4f9a-ada5-27fda4724d18] Running
	I0819 19:07:35.887899  452010 system_pods.go:89] "kube-vip-ha-163902-m02" [ccd47f0d-1f18-4146-808f-5436d25c859f] Running
	I0819 19:07:35.887903  452010 system_pods.go:89] "storage-provisioner" [05dffa5a-3372-4a79-94ad-33d14a4b7fd0] Running
	I0819 19:07:35.887913  452010 system_pods.go:126] duration metric: took 205.03521ms to wait for k8s-apps to be running ...
	I0819 19:07:35.887927  452010 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:07:35.887976  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:07:35.903872  452010 system_svc.go:56] duration metric: took 15.92984ms WaitForService to wait for kubelet
	I0819 19:07:35.903906  452010 kubeadm.go:582] duration metric: took 22.127369971s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
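
The kubelet check above boils down to running `sudo systemctl is-active --quiet service kubelet` and treating exit code 0 as "running". A local-only sketch of that idea follows; minikube actually executes the command through its ssh_runner inside the VM, and this sketch checks just the `kubelet` unit.

```go
// kubelet_active.go - hypothetical sketch of the systemctl liveness check logged above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; the exit code alone says whether the unit is active.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```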
	I0819 19:07:35.903927  452010 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:07:36.077399  452010 request.go:632] Waited for 173.365979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes
	I0819 19:07:36.077501  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes
	I0819 19:07:36.077508  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:36.077519  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:36.077545  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:36.081185  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:36.081895  452010 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:07:36.081947  452010 node_conditions.go:123] node cpu capacity is 2
	I0819 19:07:36.081961  452010 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:07:36.081969  452010 node_conditions.go:123] node cpu capacity is 2
	I0819 19:07:36.081976  452010 node_conditions.go:105] duration metric: took 178.043214ms to run NodePressure ...
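
The NodePressure step reads each node's capacity, which is where the ephemeral-storage and CPU lines above come from. A small client-go sketch that lists the nodes and prints those two figures follows; the kubeconfig path is a placeholder.

```go
// node_capacity.go - hypothetical sketch of the node capacity readout logged above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// Matches the "node storage ephemeral capacity is 17734596Ki" / "node cpu capacity is 2" lines.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
```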
	I0819 19:07:36.081992  452010 start.go:241] waiting for startup goroutines ...
	I0819 19:07:36.082023  452010 start.go:255] writing updated cluster config ...
	I0819 19:07:36.084484  452010 out.go:201] 
	I0819 19:07:36.086155  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:07:36.086268  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:07:36.088019  452010 out.go:177] * Starting "ha-163902-m03" control-plane node in "ha-163902" cluster
	I0819 19:07:36.089042  452010 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:07:36.089068  452010 cache.go:56] Caching tarball of preloaded images
	I0819 19:07:36.089224  452010 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:07:36.089237  452010 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:07:36.089368  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:07:36.089608  452010 start.go:360] acquireMachinesLock for ha-163902-m03: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:07:36.089667  452010 start.go:364] duration metric: took 35.517µs to acquireMachinesLock for "ha-163902-m03"
	I0819 19:07:36.089692  452010 start.go:93] Provisioning new machine with config: &{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:07:36.089832  452010 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0819 19:07:36.091440  452010 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 19:07:36.091555  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:07:36.091598  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:07:36.107125  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I0819 19:07:36.107690  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:07:36.108195  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:07:36.108219  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:07:36.108543  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:07:36.108692  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetMachineName
	I0819 19:07:36.108853  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:07:36.108990  452010 start.go:159] libmachine.API.Create for "ha-163902" (driver="kvm2")
	I0819 19:07:36.109016  452010 client.go:168] LocalClient.Create starting
	I0819 19:07:36.109049  452010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem
	I0819 19:07:36.109084  452010 main.go:141] libmachine: Decoding PEM data...
	I0819 19:07:36.109099  452010 main.go:141] libmachine: Parsing certificate...
	I0819 19:07:36.109171  452010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem
	I0819 19:07:36.109195  452010 main.go:141] libmachine: Decoding PEM data...
	I0819 19:07:36.109206  452010 main.go:141] libmachine: Parsing certificate...
	I0819 19:07:36.109243  452010 main.go:141] libmachine: Running pre-create checks...
	I0819 19:07:36.109252  452010 main.go:141] libmachine: (ha-163902-m03) Calling .PreCreateCheck
	I0819 19:07:36.109410  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetConfigRaw
	I0819 19:07:36.109742  452010 main.go:141] libmachine: Creating machine...
	I0819 19:07:36.109756  452010 main.go:141] libmachine: (ha-163902-m03) Calling .Create
	I0819 19:07:36.109928  452010 main.go:141] libmachine: (ha-163902-m03) Creating KVM machine...
	I0819 19:07:36.111348  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found existing default KVM network
	I0819 19:07:36.111504  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found existing private KVM network mk-ha-163902
	I0819 19:07:36.111700  452010 main.go:141] libmachine: (ha-163902-m03) Setting up store path in /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03 ...
	I0819 19:07:36.111728  452010 main.go:141] libmachine: (ha-163902-m03) Building disk image from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 19:07:36.111823  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:36.111693  452804 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:07:36.111932  452010 main.go:141] libmachine: (ha-163902-m03) Downloading /home/jenkins/minikube-integration/19423-430949/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 19:07:36.400593  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:36.400468  452804 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa...
	I0819 19:07:36.505423  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:36.505277  452804 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/ha-163902-m03.rawdisk...
	I0819 19:07:36.505469  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Writing magic tar header
	I0819 19:07:36.505526  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Writing SSH key tar header
	I0819 19:07:36.505560  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:36.505423  452804 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03 ...
	I0819 19:07:36.505589  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03
	I0819 19:07:36.505605  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines
	I0819 19:07:36.505623  452010 main.go:141] libmachine: (ha-163902-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03 (perms=drwx------)
	I0819 19:07:36.505638  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:07:36.505652  452010 main.go:141] libmachine: (ha-163902-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines (perms=drwxr-xr-x)
	I0819 19:07:36.505673  452010 main.go:141] libmachine: (ha-163902-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube (perms=drwxr-xr-x)
	I0819 19:07:36.505688  452010 main.go:141] libmachine: (ha-163902-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949 (perms=drwxrwxr-x)
	I0819 19:07:36.505702  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949
	I0819 19:07:36.505718  452010 main.go:141] libmachine: (ha-163902-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 19:07:36.505731  452010 main.go:141] libmachine: (ha-163902-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 19:07:36.505745  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 19:07:36.505760  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Checking permissions on dir: /home/jenkins
	I0819 19:07:36.505778  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Checking permissions on dir: /home
	I0819 19:07:36.505790  452010 main.go:141] libmachine: (ha-163902-m03) Creating domain...
	I0819 19:07:36.505807  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Skipping /home - not owner
	I0819 19:07:36.506793  452010 main.go:141] libmachine: (ha-163902-m03) define libvirt domain using xml: 
	I0819 19:07:36.506815  452010 main.go:141] libmachine: (ha-163902-m03) <domain type='kvm'>
	I0819 19:07:36.506826  452010 main.go:141] libmachine: (ha-163902-m03)   <name>ha-163902-m03</name>
	I0819 19:07:36.506837  452010 main.go:141] libmachine: (ha-163902-m03)   <memory unit='MiB'>2200</memory>
	I0819 19:07:36.506850  452010 main.go:141] libmachine: (ha-163902-m03)   <vcpu>2</vcpu>
	I0819 19:07:36.506860  452010 main.go:141] libmachine: (ha-163902-m03)   <features>
	I0819 19:07:36.506870  452010 main.go:141] libmachine: (ha-163902-m03)     <acpi/>
	I0819 19:07:36.506878  452010 main.go:141] libmachine: (ha-163902-m03)     <apic/>
	I0819 19:07:36.506886  452010 main.go:141] libmachine: (ha-163902-m03)     <pae/>
	I0819 19:07:36.506895  452010 main.go:141] libmachine: (ha-163902-m03)     
	I0819 19:07:36.506905  452010 main.go:141] libmachine: (ha-163902-m03)   </features>
	I0819 19:07:36.506918  452010 main.go:141] libmachine: (ha-163902-m03)   <cpu mode='host-passthrough'>
	I0819 19:07:36.506929  452010 main.go:141] libmachine: (ha-163902-m03)   
	I0819 19:07:36.506938  452010 main.go:141] libmachine: (ha-163902-m03)   </cpu>
	I0819 19:07:36.506946  452010 main.go:141] libmachine: (ha-163902-m03)   <os>
	I0819 19:07:36.506962  452010 main.go:141] libmachine: (ha-163902-m03)     <type>hvm</type>
	I0819 19:07:36.506972  452010 main.go:141] libmachine: (ha-163902-m03)     <boot dev='cdrom'/>
	I0819 19:07:36.506982  452010 main.go:141] libmachine: (ha-163902-m03)     <boot dev='hd'/>
	I0819 19:07:36.506994  452010 main.go:141] libmachine: (ha-163902-m03)     <bootmenu enable='no'/>
	I0819 19:07:36.507002  452010 main.go:141] libmachine: (ha-163902-m03)   </os>
	I0819 19:07:36.507013  452010 main.go:141] libmachine: (ha-163902-m03)   <devices>
	I0819 19:07:36.507024  452010 main.go:141] libmachine: (ha-163902-m03)     <disk type='file' device='cdrom'>
	I0819 19:07:36.507041  452010 main.go:141] libmachine: (ha-163902-m03)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/boot2docker.iso'/>
	I0819 19:07:36.507057  452010 main.go:141] libmachine: (ha-163902-m03)       <target dev='hdc' bus='scsi'/>
	I0819 19:07:36.507068  452010 main.go:141] libmachine: (ha-163902-m03)       <readonly/>
	I0819 19:07:36.507077  452010 main.go:141] libmachine: (ha-163902-m03)     </disk>
	I0819 19:07:36.507090  452010 main.go:141] libmachine: (ha-163902-m03)     <disk type='file' device='disk'>
	I0819 19:07:36.507099  452010 main.go:141] libmachine: (ha-163902-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 19:07:36.507110  452010 main.go:141] libmachine: (ha-163902-m03)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/ha-163902-m03.rawdisk'/>
	I0819 19:07:36.507117  452010 main.go:141] libmachine: (ha-163902-m03)       <target dev='hda' bus='virtio'/>
	I0819 19:07:36.507122  452010 main.go:141] libmachine: (ha-163902-m03)     </disk>
	I0819 19:07:36.507131  452010 main.go:141] libmachine: (ha-163902-m03)     <interface type='network'>
	I0819 19:07:36.507137  452010 main.go:141] libmachine: (ha-163902-m03)       <source network='mk-ha-163902'/>
	I0819 19:07:36.507144  452010 main.go:141] libmachine: (ha-163902-m03)       <model type='virtio'/>
	I0819 19:07:36.507152  452010 main.go:141] libmachine: (ha-163902-m03)     </interface>
	I0819 19:07:36.507157  452010 main.go:141] libmachine: (ha-163902-m03)     <interface type='network'>
	I0819 19:07:36.507171  452010 main.go:141] libmachine: (ha-163902-m03)       <source network='default'/>
	I0819 19:07:36.507178  452010 main.go:141] libmachine: (ha-163902-m03)       <model type='virtio'/>
	I0819 19:07:36.507184  452010 main.go:141] libmachine: (ha-163902-m03)     </interface>
	I0819 19:07:36.507190  452010 main.go:141] libmachine: (ha-163902-m03)     <serial type='pty'>
	I0819 19:07:36.507195  452010 main.go:141] libmachine: (ha-163902-m03)       <target port='0'/>
	I0819 19:07:36.507205  452010 main.go:141] libmachine: (ha-163902-m03)     </serial>
	I0819 19:07:36.507211  452010 main.go:141] libmachine: (ha-163902-m03)     <console type='pty'>
	I0819 19:07:36.507220  452010 main.go:141] libmachine: (ha-163902-m03)       <target type='serial' port='0'/>
	I0819 19:07:36.507226  452010 main.go:141] libmachine: (ha-163902-m03)     </console>
	I0819 19:07:36.507232  452010 main.go:141] libmachine: (ha-163902-m03)     <rng model='virtio'>
	I0819 19:07:36.507241  452010 main.go:141] libmachine: (ha-163902-m03)       <backend model='random'>/dev/random</backend>
	I0819 19:07:36.507248  452010 main.go:141] libmachine: (ha-163902-m03)     </rng>
	I0819 19:07:36.507281  452010 main.go:141] libmachine: (ha-163902-m03)     
	I0819 19:07:36.507305  452010 main.go:141] libmachine: (ha-163902-m03)     
	I0819 19:07:36.507323  452010 main.go:141] libmachine: (ha-163902-m03)   </devices>
	I0819 19:07:36.507335  452010 main.go:141] libmachine: (ha-163902-m03) </domain>
	I0819 19:07:36.507348  452010 main.go:141] libmachine: (ha-163902-m03) 
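
The XML above is the libvirt domain that the kvm2 driver defines for the new node. Below is a minimal sketch of defining and starting such a domain, assuming the libvirt.org/go/libvirt binding and a `domainXML` string holding the same document; the real driver goes through the docker-machine-driver-kvm2 plugin rather than calling libvirt directly from minikube.

```go
// define_domain.go - hypothetical sketch of defining and starting the libvirt domain shown above,
// assuming the libvirt.org/go/libvirt binding is available.
package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	domainXML := "<domain type='kvm'>...</domain>" // placeholder for the full XML logged above

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the VM; the driver then waits for a DHCP lease
		panic(err)
	}
	fmt.Println("domain defined and started")
}
```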
	I0819 19:07:36.514621  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:01:59:d5 in network default
	I0819 19:07:36.515251  452010 main.go:141] libmachine: (ha-163902-m03) Ensuring networks are active...
	I0819 19:07:36.515280  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:36.516061  452010 main.go:141] libmachine: (ha-163902-m03) Ensuring network default is active
	I0819 19:07:36.516373  452010 main.go:141] libmachine: (ha-163902-m03) Ensuring network mk-ha-163902 is active
	I0819 19:07:36.516729  452010 main.go:141] libmachine: (ha-163902-m03) Getting domain xml...
	I0819 19:07:36.517399  452010 main.go:141] libmachine: (ha-163902-m03) Creating domain...
	I0819 19:07:37.778280  452010 main.go:141] libmachine: (ha-163902-m03) Waiting to get IP...
	I0819 19:07:37.778990  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:37.779391  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:37.779424  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:37.779376  452804 retry.go:31] will retry after 201.989618ms: waiting for machine to come up
	I0819 19:07:37.982964  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:37.983443  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:37.983475  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:37.983388  452804 retry.go:31] will retry after 261.868176ms: waiting for machine to come up
	I0819 19:07:38.247079  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:38.247579  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:38.247614  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:38.247531  452804 retry.go:31] will retry after 461.578514ms: waiting for machine to come up
	I0819 19:07:38.711258  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:38.711717  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:38.711748  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:38.711682  452804 retry.go:31] will retry after 459.351794ms: waiting for machine to come up
	I0819 19:07:39.172292  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:39.172698  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:39.172726  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:39.172651  452804 retry.go:31] will retry after 511.700799ms: waiting for machine to come up
	I0819 19:07:39.686535  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:39.686958  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:39.686991  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:39.686913  452804 retry.go:31] will retry after 731.052181ms: waiting for machine to come up
	I0819 19:07:40.419905  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:40.420410  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:40.420439  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:40.420350  452804 retry.go:31] will retry after 818.727574ms: waiting for machine to come up
	I0819 19:07:41.240939  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:41.241384  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:41.241410  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:41.241347  452804 retry.go:31] will retry after 1.138879364s: waiting for machine to come up
	I0819 19:07:42.382012  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:42.382402  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:42.382429  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:42.382370  452804 retry.go:31] will retry after 1.474683081s: waiting for machine to come up
	I0819 19:07:43.858547  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:43.859046  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:43.859077  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:43.858993  452804 retry.go:31] will retry after 1.583490461s: waiting for machine to come up
	I0819 19:07:45.444669  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:45.445085  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:45.445109  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:45.445037  452804 retry.go:31] will retry after 2.780886536s: waiting for machine to come up
	I0819 19:07:48.227136  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:48.227508  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:48.227544  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:48.227451  452804 retry.go:31] will retry after 3.081211101s: waiting for machine to come up
	I0819 19:07:51.310606  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:51.311119  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:51.311149  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:51.311040  452804 retry.go:31] will retry after 4.021238642s: waiting for machine to come up
	I0819 19:07:55.336313  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:55.336861  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:55.336892  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:55.336810  452804 retry.go:31] will retry after 4.178616831s: waiting for machine to come up
	I0819 19:07:59.519446  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.519840  452010 main.go:141] libmachine: (ha-163902-m03) Found IP for machine: 192.168.39.59
	I0819 19:07:59.519869  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has current primary IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
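
While the domain boots, the driver polls for a DHCP lease with growing retry intervals (the retry.go lines above, starting around 200ms and climbing to several seconds). A standalone sketch of that pattern follows; `lookupIP` is a stand-in for the driver's lease lookup, and the growth factor is illustrative rather than minikube's exact backoff.

```go
// wait_for_ip.go - hypothetical sketch of the grow-and-retry loop used while waiting for an IP.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a placeholder for querying the libvirt network's DHCP leases.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

func main() {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(3 * time.Minute)

	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second { // cap the growth, roughly mirroring the intervals in the log
			delay += delay / 2
		}
	}
	fmt.Println("timed out waiting for an IP")
}
```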
	I0819 19:07:59.519880  452010 main.go:141] libmachine: (ha-163902-m03) Reserving static IP address...
	I0819 19:07:59.520257  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find host DHCP lease matching {name: "ha-163902-m03", mac: "52:54:00:64:e1:28", ip: "192.168.39.59"} in network mk-ha-163902
	I0819 19:07:59.602930  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Getting to WaitForSSH function...
	I0819 19:07:59.602963  452010 main.go:141] libmachine: (ha-163902-m03) Reserved static IP address: 192.168.39.59
	I0819 19:07:59.602977  452010 main.go:141] libmachine: (ha-163902-m03) Waiting for SSH to be available...
	I0819 19:07:59.605508  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.605880  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:minikube Clientid:01:52:54:00:64:e1:28}
	I0819 19:07:59.605912  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.606089  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Using SSH client type: external
	I0819 19:07:59.606124  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa (-rw-------)
	I0819 19:07:59.606186  452010 main.go:141] libmachine: (ha-163902-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:07:59.606211  452010 main.go:141] libmachine: (ha-163902-m03) DBG | About to run SSH command:
	I0819 19:07:59.606230  452010 main.go:141] libmachine: (ha-163902-m03) DBG | exit 0
	I0819 19:07:59.729073  452010 main.go:141] libmachine: (ha-163902-m03) DBG | SSH cmd err, output: <nil>: 
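
WaitForSSH above shells out to the system ssh client with host-key checking disabled and runs `exit 0` until the login succeeds. A simplified sketch with os/exec is below; the address and key path are copied from the log, the option list is trimmed, and the retry count is arbitrary.

```go
// wait_for_ssh.go - hypothetical sketch of probing SSH reachability the way the driver does above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(addr, keyPath string) bool {
	// A trimmed-down version of the option list in the log; `exit 0` just proves login works.
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+addr, "exit 0")
	return cmd.Run() == nil
}

func main() {
	addr := "192.168.39.59" // from the DHCP lease above
	key := "/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReady(addr, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```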
	I0819 19:07:59.729371  452010 main.go:141] libmachine: (ha-163902-m03) KVM machine creation complete!
	I0819 19:07:59.729687  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetConfigRaw
	I0819 19:07:59.730238  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:07:59.730492  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:07:59.730750  452010 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 19:07:59.730777  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetState
	I0819 19:07:59.732125  452010 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 19:07:59.732138  452010 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 19:07:59.732145  452010 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 19:07:59.732150  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:07:59.734706  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.735097  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:07:59.735132  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.735278  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:07:59.735490  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:07:59.735645  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:07:59.735786  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:07:59.735964  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:07:59.736214  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0819 19:07:59.736231  452010 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 19:07:59.836625  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:07:59.836650  452010 main.go:141] libmachine: Detecting the provisioner...
	I0819 19:07:59.836658  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:07:59.839611  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.840018  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:07:59.840041  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.840200  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:07:59.840468  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:07:59.840644  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:07:59.840787  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:07:59.840990  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:07:59.841246  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0819 19:07:59.841266  452010 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 19:07:59.941671  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 19:07:59.941744  452010 main.go:141] libmachine: found compatible host: buildroot
	I0819 19:07:59.941758  452010 main.go:141] libmachine: Provisioning with buildroot...
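
Provisioner detection above is just `cat /etc/os-release` over SSH followed by matching the ID field (buildroot here). A small parser sketch follows; it reads the local file rather than the VM's, purely for illustration.

```go
// detect_provisioner.go - hypothetical sketch of picking a provisioner from /etc/os-release.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release") // the real check cats this file on the VM over SSH
	if err != nil {
		panic(err)
	}
	defer f.Close()

	fields := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			fields[k] = strings.Trim(v, "\"")
		}
	}
	// The log matched ID=buildroot and chose the buildroot provisioner.
	fmt.Printf("ID=%s VERSION_ID=%s\n", fields["ID"], fields["VERSION_ID"])
}
```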
	I0819 19:07:59.941770  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetMachineName
	I0819 19:07:59.942040  452010 buildroot.go:166] provisioning hostname "ha-163902-m03"
	I0819 19:07:59.942064  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetMachineName
	I0819 19:07:59.942238  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:07:59.944835  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.945391  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:07:59.945424  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.945655  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:07:59.945903  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:07:59.946061  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:07:59.946259  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:07:59.946454  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:07:59.946678  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0819 19:07:59.946696  452010 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-163902-m03 && echo "ha-163902-m03" | sudo tee /etc/hostname
	I0819 19:08:00.059346  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-163902-m03
	
	I0819 19:08:00.059388  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:08:00.062867  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.063351  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.063386  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.063610  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:08:00.063854  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.064068  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.064291  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:08:00.064511  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:08:00.064741  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0819 19:08:00.064766  452010 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-163902-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-163902-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-163902-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:08:00.174242  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:08:00.174275  452010 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:08:00.174292  452010 buildroot.go:174] setting up certificates
	I0819 19:08:00.174303  452010 provision.go:84] configureAuth start
	I0819 19:08:00.174312  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetMachineName
	I0819 19:08:00.174651  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:08:00.177712  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.178225  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.178257  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.178434  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:08:00.181016  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.181423  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.181460  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.181697  452010 provision.go:143] copyHostCerts
	I0819 19:08:00.181736  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:08:00.181774  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:08:00.181782  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:08:00.181847  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:08:00.181932  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:08:00.181954  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:08:00.181959  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:08:00.181993  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:08:00.182058  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:08:00.182077  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:08:00.182084  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:08:00.182111  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:08:00.182188  452010 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.ha-163902-m03 san=[127.0.0.1 192.168.39.59 ha-163902-m03 localhost minikube]
	I0819 19:08:00.339541  452010 provision.go:177] copyRemoteCerts
	I0819 19:08:00.339611  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:08:00.339642  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:08:00.342788  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.343151  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.343183  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.343382  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:08:00.343619  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.343811  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:08:00.343949  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:08:00.427994  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 19:08:00.428096  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:08:00.453164  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 19:08:00.453264  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 19:08:00.478133  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 19:08:00.478226  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 19:08:00.503354  452010 provision.go:87] duration metric: took 329.03716ms to configureAuth
	I0819 19:08:00.503389  452010 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:08:00.503592  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:08:00.503669  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:08:00.506412  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.506727  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.506761  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.506986  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:08:00.507176  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.507347  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.507478  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:08:00.507662  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:08:00.507842  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0819 19:08:00.507857  452010 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:08:00.766314  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
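The command above drops a one-line environment file on the guest and restarts CRI-O so the extra insecure-registry range (the cluster service CIDR 10.96.0.0/12) takes effect; the echoed output confirms what was written. Assuming shell access to the node (e.g. via minikube ssh), it can be double-checked with:

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio      # should report "active" after the restart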
	I0819 19:08:00.766349  452010 main.go:141] libmachine: Checking connection to Docker...
	I0819 19:08:00.766359  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetURL
	I0819 19:08:00.767607  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Using libvirt version 6000000
	I0819 19:08:00.769654  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.769940  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.769965  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.770114  452010 main.go:141] libmachine: Docker is up and running!
	I0819 19:08:00.770130  452010 main.go:141] libmachine: Reticulating splines...
	I0819 19:08:00.770139  452010 client.go:171] duration metric: took 24.661112457s to LocalClient.Create
	I0819 19:08:00.770168  452010 start.go:167] duration metric: took 24.661176781s to libmachine.API.Create "ha-163902"
	I0819 19:08:00.770181  452010 start.go:293] postStartSetup for "ha-163902-m03" (driver="kvm2")
	I0819 19:08:00.770194  452010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:08:00.770251  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:08:00.770522  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:08:00.770547  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:08:00.772714  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.773038  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.773063  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.773284  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:08:00.773490  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.773670  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:08:00.773823  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:08:00.855905  452010 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:08:00.860522  452010 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:08:00.860561  452010 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:08:00.860637  452010 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:08:00.860711  452010 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:08:00.860723  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 19:08:00.860806  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:08:00.870943  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:08:00.896463  452010 start.go:296] duration metric: took 126.241228ms for postStartSetup
	I0819 19:08:00.896533  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetConfigRaw
	I0819 19:08:00.897179  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:08:00.900265  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.900710  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.900740  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.901076  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:08:00.901336  452010 start.go:128] duration metric: took 24.811490278s to createHost
	I0819 19:08:00.901363  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:08:00.904010  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.904443  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.904482  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.904708  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:08:00.904944  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.905158  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.905329  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:08:00.905516  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:08:00.905693  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0819 19:08:00.905705  452010 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:08:01.005651  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094480.983338337
	
	I0819 19:08:01.005682  452010 fix.go:216] guest clock: 1724094480.983338337
	I0819 19:08:01.005691  452010 fix.go:229] Guest: 2024-08-19 19:08:00.983338337 +0000 UTC Remote: 2024-08-19 19:08:00.90135049 +0000 UTC m=+149.520267210 (delta=81.987847ms)
	I0819 19:08:01.005713  452010 fix.go:200] guest clock delta is within tolerance: 81.987847ms
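The reported delta is simply the guest timestamp minus the host-side ("Remote") timestamp, both taken as Unix seconds (2024-08-19 19:08:00 UTC corresponds to epoch 1724094480). A quick bc check reproduces the logged value:

	echo '1724094480.983338337 - 1724094480.901350490' | bc
	# .081987847  ->  ~81.988 ms, matching the logged delta=81.987847ms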
	I0819 19:08:01.005719  452010 start.go:83] releasing machines lock for "ha-163902-m03", held for 24.916039308s
	I0819 19:08:01.005738  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:08:01.006030  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:08:01.008918  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:01.009375  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:01.009408  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:01.011766  452010 out.go:177] * Found network options:
	I0819 19:08:01.013204  452010 out.go:177]   - NO_PROXY=192.168.39.227,192.168.39.162
	W0819 19:08:01.014661  452010 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 19:08:01.014721  452010 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 19:08:01.014747  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:08:01.015512  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:08:01.015734  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:08:01.015852  452010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:08:01.015895  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	W0819 19:08:01.015907  452010 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 19:08:01.015930  452010 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 19:08:01.015999  452010 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:08:01.016019  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:08:01.018993  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:01.019218  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:01.019377  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:01.019409  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:01.019544  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:08:01.019630  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:01.019658  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:01.019780  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:01.019867  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:08:01.019946  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:08:01.020119  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:01.020121  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:08:01.020268  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:08:01.020406  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:08:01.256605  452010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:08:01.262429  452010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:08:01.262510  452010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:08:01.279148  452010 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:08:01.279178  452010 start.go:495] detecting cgroup driver to use...
	I0819 19:08:01.279279  452010 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:08:01.295140  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:08:01.310453  452010 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:08:01.310548  452010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:08:01.325144  452010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:08:01.339258  452010 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:08:01.457252  452010 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:08:01.608271  452010 docker.go:233] disabling docker service ...
	I0819 19:08:01.608362  452010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:08:01.623400  452010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:08:01.636827  452010 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:08:01.763505  452010 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:08:01.888305  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:08:01.904137  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:08:01.925244  452010 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:08:01.925338  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:08:01.936413  452010 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:08:01.936497  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:08:01.947087  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:08:01.958019  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:08:01.968792  452010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:08:01.979506  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:08:01.989753  452010 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:08:02.008365  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:08:02.018807  452010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:08:02.028653  452010 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:08:02.028726  452010 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:08:02.041766  452010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
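Taken together, the sed edits above leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a sketch assuming the stock section layout of that file; only the keys touched here are shown), while the two follow-up commands load br_netfilter and enable IPv4 forwarding before the daemon is restarted:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"        # pause image pinned for this Kubernetes version

	[crio.runtime]
	cgroup_manager = "cgroupfs"                       # cgroup driver requested in the log line above
	conmon_cgroup = "pod"                             # re-inserted right after cgroup_manager
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",        # let pods bind low ports without extra privileges
	]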
	I0819 19:08:02.051544  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:08:02.174077  452010 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:08:02.317778  452010 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:08:02.317862  452010 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:08:02.322416  452010 start.go:563] Will wait 60s for crictl version
	I0819 19:08:02.322484  452010 ssh_runner.go:195] Run: which crictl
	I0819 19:08:02.326570  452010 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:08:02.365977  452010 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:08:02.366079  452010 ssh_runner.go:195] Run: crio --version
	I0819 19:08:02.394133  452010 ssh_runner.go:195] Run: crio --version
	I0819 19:08:02.429162  452010 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:08:02.430494  452010 out.go:177]   - env NO_PROXY=192.168.39.227
	I0819 19:08:02.431834  452010 out.go:177]   - env NO_PROXY=192.168.39.227,192.168.39.162
	I0819 19:08:02.432993  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:08:02.435949  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:02.436345  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:02.436374  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:02.436663  452010 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:08:02.440966  452010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:08:02.454241  452010 mustload.go:65] Loading cluster: ha-163902
	I0819 19:08:02.454555  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:08:02.454969  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:02.455036  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:02.470591  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42385
	I0819 19:08:02.471041  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:02.471544  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:08:02.471567  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:02.471953  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:02.472185  452010 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:08:02.473914  452010 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:08:02.474219  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:02.474266  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:02.489232  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38515
	I0819 19:08:02.489782  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:02.490298  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:08:02.490325  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:02.490705  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:02.490980  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:08:02.491183  452010 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902 for IP: 192.168.39.59
	I0819 19:08:02.491198  452010 certs.go:194] generating shared ca certs ...
	I0819 19:08:02.491220  452010 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:08:02.491389  452010 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:08:02.491466  452010 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:08:02.491481  452010 certs.go:256] generating profile certs ...
	I0819 19:08:02.491571  452010 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key
	I0819 19:08:02.491604  452010 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.6e37453b
	I0819 19:08:02.491619  452010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.6e37453b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.162 192.168.39.59 192.168.39.254]
	I0819 19:08:02.699925  452010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.6e37453b ...
	I0819 19:08:02.699960  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.6e37453b: {Name:mkdb2ac70439b3fafaf57c897ab119c81d9f16b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:08:02.700137  452010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.6e37453b ...
	I0819 19:08:02.700151  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.6e37453b: {Name:mkdb82289c5f550445a85b6895e8f4b5e0088fb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:08:02.700223  452010 certs.go:381] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.6e37453b -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt
	I0819 19:08:02.700358  452010 certs.go:385] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.6e37453b -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key
	I0819 19:08:02.700484  452010 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key
	I0819 19:08:02.700506  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 19:08:02.700524  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 19:08:02.700538  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 19:08:02.700553  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 19:08:02.700567  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 19:08:02.700589  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 19:08:02.700607  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 19:08:02.700621  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 19:08:02.700683  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:08:02.700726  452010 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:08:02.700740  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:08:02.700773  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:08:02.700805  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:08:02.700839  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:08:02.700896  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:08:02.700936  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:08:02.700957  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 19:08:02.700975  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 19:08:02.701021  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:08:02.704616  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:08:02.705072  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:08:02.705107  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:08:02.705303  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:08:02.705557  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:08:02.705744  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:08:02.705871  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:08:02.777624  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 19:08:02.782262  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 19:08:02.793290  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 19:08:02.797370  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 19:08:02.808750  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 19:08:02.813037  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 19:08:02.824062  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 19:08:02.829115  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 19:08:02.844911  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 19:08:02.849690  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 19:08:02.861549  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 19:08:02.865899  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 19:08:02.877670  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:08:02.902542  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:08:02.927413  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:08:02.952642  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:08:02.976866  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 19:08:03.001679  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:08:03.025547  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:08:03.050013  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:08:03.073976  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:08:03.098015  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:08:03.122121  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:08:03.146271  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 19:08:03.163290  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 19:08:03.179944  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 19:08:03.196597  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 19:08:03.214309  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 19:08:03.231234  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 19:08:03.248232  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 19:08:03.264984  452010 ssh_runner.go:195] Run: openssl version
	I0819 19:08:03.270688  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:08:03.281587  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:08:03.286125  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:08:03.286220  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:08:03.291949  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:08:03.303301  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:08:03.314768  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:08:03.319802  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:08:03.319875  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:08:03.326008  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:08:03.336795  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:08:03.347810  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:08:03.352483  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:08:03.352572  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:08:03.358332  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:08:03.370380  452010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:08:03.374810  452010 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 19:08:03.374889  452010 kubeadm.go:934] updating node {m03 192.168.39.59 8443 v1.31.0 crio true true} ...
	I0819 19:08:03.375006  452010 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-163902-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
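The kubelet unit override above pins the node name (ha-163902-m03) and node IP (192.168.39.59) this kubelet will register with; the file itself is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down. Once it is in place, the effective unit can be inspected on the guest with standard systemd tooling:

	systemctl cat kubelet                    # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p DropInPaths    # lists the drop-in files systemd picked up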
	I0819 19:08:03.375041  452010 kube-vip.go:115] generating kube-vip config ...
	I0819 19:08:03.375096  452010 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 19:08:03.390821  452010 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 19:08:03.390918  452010 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
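This manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs kube-vip as a static pod that takes part in leader election and advertises the control-plane VIP 192.168.39.254 on port 8443 over eth0. With shell access to a control-plane guest, a rough sanity check is:

	sudo crictl ps --name kube-vip                   # static pod container should be running
	ip addr show dev eth0 | grep 192.168.39.254      # the VIP appears only on the current leader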
	I0819 19:08:03.390999  452010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:08:03.401323  452010 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 19:08:03.401404  452010 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 19:08:03.413899  452010 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 19:08:03.413934  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 19:08:03.413935  452010 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0819 19:08:03.413940  452010 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0819 19:08:03.413955  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 19:08:03.414004  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:08:03.414015  452010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 19:08:03.414015  452010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 19:08:03.421878  452010 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 19:08:03.421923  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 19:08:03.449151  452010 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 19:08:03.449194  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 19:08:03.449253  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 19:08:03.449353  452010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 19:08:03.502848  452010 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 19:08:03.502899  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0819 19:08:04.290555  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 19:08:04.300247  452010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 19:08:04.318384  452010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:08:04.336121  452010 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 19:08:04.353588  452010 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 19:08:04.358137  452010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:08:04.371117  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:08:04.499688  452010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:08:04.516516  452010 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:08:04.517014  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:04.517065  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:04.533118  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40889
	I0819 19:08:04.533627  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:04.534203  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:08:04.534229  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:04.534580  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:04.534780  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:08:04.534971  452010 start.go:317] joinCluster: &{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:04.535149  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 19:08:04.535171  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:08:04.538452  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:08:04.538860  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:08:04.538887  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:08:04.539128  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:08:04.539341  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:08:04.539540  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:08:04.539709  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:08:04.686782  452010 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:08:04.686841  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3w4elc.e4uij2tmkcoo2axg --discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-163902-m03 --control-plane --apiserver-advertise-address=192.168.39.59 --apiserver-bind-port=8443"
	I0819 19:08:25.234963  452010 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3w4elc.e4uij2tmkcoo2axg --discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-163902-m03 --control-plane --apiserver-advertise-address=192.168.39.59 --apiserver-bind-port=8443": (20.548098323s)
	I0819 19:08:25.235003  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 19:08:25.823383  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-163902-m03 minikube.k8s.io/updated_at=2024_08_19T19_08_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=ha-163902 minikube.k8s.io/primary=false
	I0819 19:08:25.933989  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-163902-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 19:08:26.057761  452010 start.go:319] duration metric: took 21.522783925s to joinCluster
	I0819 19:08:26.057846  452010 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:08:26.058174  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:08:26.060509  452010 out.go:177] * Verifying Kubernetes components...
	I0819 19:08:26.061862  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:08:26.350042  452010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:08:26.381179  452010 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:08:26.381576  452010 kapi.go:59] client config for ha-163902: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key", CAFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 19:08:26.381668  452010 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.227:8443
	I0819 19:08:26.381963  452010 node_ready.go:35] waiting up to 6m0s for node "ha-163902-m03" to be "Ready" ...
	I0819 19:08:26.382071  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:26.382081  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:26.382092  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:26.382100  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:26.385928  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:26.883195  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:26.883227  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:26.883239  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:26.883246  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:26.886826  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:27.382499  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:27.382538  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:27.382548  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:27.382552  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:27.387767  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:27.882159  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:27.882184  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:27.882195  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:27.882201  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:27.885513  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:28.383264  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:28.383291  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:28.383302  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:28.383309  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:28.387040  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:28.387555  452010 node_ready.go:53] node "ha-163902-m03" has status "Ready":"False"
	I0819 19:08:28.883000  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:28.883028  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:28.883037  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:28.883041  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:28.885958  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:08:29.382390  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:29.382417  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:29.382428  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:29.382436  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:29.388004  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:29.882436  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:29.882465  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:29.882477  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:29.882483  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:29.886513  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:08:30.382185  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:30.382210  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:30.382218  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:30.382222  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:30.385984  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:30.882401  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:30.882424  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:30.882434  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:30.882437  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:30.886101  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:30.886651  452010 node_ready.go:53] node "ha-163902-m03" has status "Ready":"False"
	I0819 19:08:31.383169  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:31.383197  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:31.383204  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:31.383208  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:31.392558  452010 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 19:08:31.882538  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:31.882567  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:31.882579  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:31.882584  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:31.886248  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:32.382331  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:32.382366  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:32.382380  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:32.382385  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:32.413073  452010 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0819 19:08:32.883122  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:32.883147  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:32.883155  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:32.883161  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:32.886449  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:32.887070  452010 node_ready.go:53] node "ha-163902-m03" has status "Ready":"False"
	I0819 19:08:33.382309  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:33.382336  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:33.382345  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:33.382349  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:33.387650  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:33.882318  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:33.882340  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:33.882361  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:33.882374  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:33.885700  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:34.383092  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:34.383118  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:34.383127  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:34.383131  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:34.386669  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:34.883193  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:34.883225  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:34.883236  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:34.883245  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:34.886875  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:34.887594  452010 node_ready.go:53] node "ha-163902-m03" has status "Ready":"False"
	I0819 19:08:35.382883  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:35.382908  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:35.382919  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:35.382924  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:35.388306  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:35.883136  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:35.883161  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:35.883172  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:35.883179  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:35.886966  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:36.382409  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:36.382439  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:36.382449  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:36.382454  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:36.386669  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:08:36.882948  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:36.882979  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:36.882991  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:36.882999  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:36.887423  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:08:36.888586  452010 node_ready.go:53] node "ha-163902-m03" has status "Ready":"False"
	I0819 19:08:37.382997  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:37.383023  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:37.383031  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:37.383036  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:37.388940  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:37.882907  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:37.882932  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:37.882943  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:37.882949  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:37.886760  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:38.382378  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:38.382404  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:38.382412  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:38.382415  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:38.386159  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:38.882979  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:38.883002  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:38.883013  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:38.883016  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:38.886175  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:39.382642  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:39.382667  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:39.382678  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:39.382684  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:39.388121  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:39.388749  452010 node_ready.go:53] node "ha-163902-m03" has status "Ready":"False"
	I0819 19:08:39.882457  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:39.882484  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:39.882496  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:39.882499  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:39.885976  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:40.383115  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:40.383146  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:40.383158  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:40.383165  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:40.386943  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:40.882982  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:40.883008  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:40.883017  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:40.883021  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:40.886577  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:41.382955  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:41.383040  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:41.383057  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:41.383067  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:41.393526  452010 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0819 19:08:41.394054  452010 node_ready.go:53] node "ha-163902-m03" has status "Ready":"False"
	I0819 19:08:41.882909  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:41.882934  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:41.882942  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:41.882946  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:41.886816  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:42.382558  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:42.382585  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:42.382593  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:42.382600  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:42.386198  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:42.883124  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:42.883150  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:42.883160  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:42.883165  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:42.887151  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:43.383139  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:43.383164  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.383172  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.383176  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.389641  452010 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 19:08:43.882399  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:43.882419  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.882431  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.882434  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.885368  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:08:43.885959  452010 node_ready.go:49] node "ha-163902-m03" has status "Ready":"True"
	I0819 19:08:43.885980  452010 node_ready.go:38] duration metric: took 17.503998711s for node "ha-163902-m03" to be "Ready" ...
	I0819 19:08:43.885989  452010 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:08:43.886052  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:08:43.886061  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.886068  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.886078  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.894167  452010 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0819 19:08:43.902800  452010 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nkths" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.902922  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-nkths
	I0819 19:08:43.902933  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.902945  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.902955  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.913178  452010 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0819 19:08:43.914107  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:43.914131  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.914140  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.914146  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.923550  452010 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 19:08:43.924109  452010 pod_ready.go:93] pod "coredns-6f6b679f8f-nkths" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:43.924137  452010 pod_ready.go:82] duration metric: took 21.302191ms for pod "coredns-6f6b679f8f-nkths" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.924152  452010 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wmp8k" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.924241  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-wmp8k
	I0819 19:08:43.924252  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.924262  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.924268  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.935934  452010 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0819 19:08:43.936762  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:43.936790  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.936802  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.936810  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.942538  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:43.943140  452010 pod_ready.go:93] pod "coredns-6f6b679f8f-wmp8k" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:43.943168  452010 pod_ready.go:82] duration metric: took 19.008048ms for pod "coredns-6f6b679f8f-wmp8k" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.943182  452010 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.943271  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-163902
	I0819 19:08:43.943281  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.943291  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.943298  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.955730  452010 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0819 19:08:43.956370  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:43.956390  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.956397  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.956413  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.964228  452010 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 19:08:43.964868  452010 pod_ready.go:93] pod "etcd-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:43.964892  452010 pod_ready.go:82] duration metric: took 21.699653ms for pod "etcd-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.964906  452010 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.964984  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-163902-m02
	I0819 19:08:43.964993  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.965000  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.965007  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.967866  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:08:43.968384  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:43.968400  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.968410  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.968417  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.971446  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:43.971896  452010 pod_ready.go:93] pod "etcd-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:43.971915  452010 pod_ready.go:82] duration metric: took 7.00153ms for pod "etcd-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.971926  452010 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:44.083312  452010 request.go:632] Waited for 111.279722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-163902-m03
	I0819 19:08:44.083379  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-163902-m03
	I0819 19:08:44.083384  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:44.083392  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:44.083403  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:44.087420  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:44.283432  452010 request.go:632] Waited for 195.380757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:44.283539  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:44.283548  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:44.283566  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:44.283578  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:44.286995  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:44.287527  452010 pod_ready.go:93] pod "etcd-ha-163902-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:44.287548  452010 pod_ready.go:82] duration metric: took 315.616421ms for pod "etcd-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:44.287566  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:44.482809  452010 request.go:632] Waited for 195.156559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902
	I0819 19:08:44.482879  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902
	I0819 19:08:44.482885  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:44.482893  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:44.482898  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:44.486578  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:44.683016  452010 request.go:632] Waited for 195.47481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:44.683117  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:44.683128  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:44.683141  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:44.683151  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:44.687428  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:08:44.688462  452010 pod_ready.go:93] pod "kube-apiserver-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:44.688487  452010 pod_ready.go:82] duration metric: took 400.913719ms for pod "kube-apiserver-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:44.688500  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:44.882696  452010 request.go:632] Waited for 194.112883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902-m02
	I0819 19:08:44.882790  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902-m02
	I0819 19:08:44.882795  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:44.882803  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:44.882808  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:44.886290  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:45.083331  452010 request.go:632] Waited for 196.362296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:45.083426  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:45.083439  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:45.083448  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:45.083452  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:45.087138  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:45.087948  452010 pod_ready.go:93] pod "kube-apiserver-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:45.087967  452010 pod_ready.go:82] duration metric: took 399.459635ms for pod "kube-apiserver-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:45.087977  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:45.283166  452010 request.go:632] Waited for 195.099626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902-m03
	I0819 19:08:45.283231  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902-m03
	I0819 19:08:45.283256  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:45.283266  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:45.283271  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:45.287363  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:08:45.482561  452010 request.go:632] Waited for 194.317595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:45.482642  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:45.482649  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:45.482660  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:45.482666  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:45.486243  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:45.486936  452010 pod_ready.go:93] pod "kube-apiserver-ha-163902-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:45.486958  452010 pod_ready.go:82] duration metric: took 398.972984ms for pod "kube-apiserver-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:45.486974  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:45.683031  452010 request.go:632] Waited for 195.974322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902
	I0819 19:08:45.683107  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902
	I0819 19:08:45.683115  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:45.683122  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:45.683126  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:45.686806  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:45.883260  452010 request.go:632] Waited for 195.449245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:45.883331  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:45.883338  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:45.883351  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:45.883361  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:45.886660  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:45.887178  452010 pod_ready.go:93] pod "kube-controller-manager-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:45.887205  452010 pod_ready.go:82] duration metric: took 400.222232ms for pod "kube-controller-manager-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:45.887220  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:46.083315  452010 request.go:632] Waited for 196.007764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902-m02
	I0819 19:08:46.083413  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902-m02
	I0819 19:08:46.083424  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:46.083435  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:46.083441  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:46.086684  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:46.282594  452010 request.go:632] Waited for 195.289033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:46.282660  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:46.282665  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:46.282675  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:46.282682  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:46.286132  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:46.286819  452010 pod_ready.go:93] pod "kube-controller-manager-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:46.286838  452010 pod_ready.go:82] duration metric: took 399.610376ms for pod "kube-controller-manager-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:46.286849  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:46.482894  452010 request.go:632] Waited for 195.946883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902-m03
	I0819 19:08:46.482973  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902-m03
	I0819 19:08:46.482979  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:46.482987  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:46.482993  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:46.486332  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:46.683439  452010 request.go:632] Waited for 196.274914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:46.683519  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:46.683527  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:46.683534  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:46.683551  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:46.687351  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:46.687932  452010 pod_ready.go:93] pod "kube-controller-manager-ha-163902-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:46.687955  452010 pod_ready.go:82] duration metric: took 401.098178ms for pod "kube-controller-manager-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:46.687970  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4whvs" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:46.883141  452010 request.go:632] Waited for 195.089336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4whvs
	I0819 19:08:46.883241  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4whvs
	I0819 19:08:46.883252  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:46.883264  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:46.883271  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:46.886467  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:47.083466  452010 request.go:632] Waited for 196.390904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:47.083540  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:47.083545  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:47.083553  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:47.083557  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:47.086642  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:47.087156  452010 pod_ready.go:93] pod "kube-proxy-4whvs" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:47.087181  452010 pod_ready.go:82] duration metric: took 399.202246ms for pod "kube-proxy-4whvs" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:47.087194  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wxrsv" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:47.283324  452010 request.go:632] Waited for 196.023466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxrsv
	I0819 19:08:47.283401  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxrsv
	I0819 19:08:47.283409  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:47.283420  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:47.283426  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:47.286972  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:47.483030  452010 request.go:632] Waited for 195.414631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:47.483097  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:47.483102  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:47.483110  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:47.483115  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:47.486659  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:47.487234  452010 pod_ready.go:93] pod "kube-proxy-wxrsv" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:47.487258  452010 pod_ready.go:82] duration metric: took 400.05675ms for pod "kube-proxy-wxrsv" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:47.487273  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xq852" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:47.682816  452010 request.go:632] Waited for 195.458544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xq852
	I0819 19:08:47.682896  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xq852
	I0819 19:08:47.682904  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:47.682913  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:47.682920  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:47.686683  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:47.883407  452010 request.go:632] Waited for 196.145004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:47.883489  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:47.883506  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:47.883535  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:47.883543  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:47.886789  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:47.887318  452010 pod_ready.go:93] pod "kube-proxy-xq852" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:47.887338  452010 pod_ready.go:82] duration metric: took 400.057272ms for pod "kube-proxy-xq852" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:47.887351  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:48.082420  452010 request.go:632] Waited for 194.983477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902
	I0819 19:08:48.082496  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902
	I0819 19:08:48.082501  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:48.082508  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:48.082512  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:48.085794  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:48.282892  452010 request.go:632] Waited for 196.439767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:48.282964  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:48.282980  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:48.282989  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:48.282996  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:48.286338  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:48.287017  452010 pod_ready.go:93] pod "kube-scheduler-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:48.287038  452010 pod_ready.go:82] duration metric: took 399.679568ms for pod "kube-scheduler-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:48.287049  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:48.483208  452010 request.go:632] Waited for 196.075203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902-m02
	I0819 19:08:48.483326  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902-m02
	I0819 19:08:48.483338  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:48.483348  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:48.483357  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:48.487579  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:08:48.682566  452010 request.go:632] Waited for 194.284217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:48.682653  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:48.682664  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:48.682674  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:48.682681  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:48.686127  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:48.686638  452010 pod_ready.go:93] pod "kube-scheduler-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:48.686667  452010 pod_ready.go:82] duration metric: took 399.610522ms for pod "kube-scheduler-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:48.686682  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:48.882743  452010 request.go:632] Waited for 195.96599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902-m03
	I0819 19:08:48.882809  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902-m03
	I0819 19:08:48.882816  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:48.882824  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:48.882829  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:48.886603  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:49.082544  452010 request.go:632] Waited for 195.332113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:49.082624  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:49.082632  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:49.082645  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:49.082655  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:49.086117  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:49.086515  452010 pod_ready.go:93] pod "kube-scheduler-ha-163902-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:49.086534  452010 pod_ready.go:82] duration metric: took 399.843776ms for pod "kube-scheduler-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:49.086547  452010 pod_ready.go:39] duration metric: took 5.200548968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:08:49.086566  452010 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:08:49.086627  452010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:08:49.101315  452010 api_server.go:72] duration metric: took 23.043421745s to wait for apiserver process to appear ...
	I0819 19:08:49.101354  452010 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:08:49.101378  452010 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0819 19:08:49.107203  452010 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0819 19:08:49.107304  452010 round_trippers.go:463] GET https://192.168.39.227:8443/version
	I0819 19:08:49.107314  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:49.107325  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:49.107331  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:49.108796  452010 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 19:08:49.108897  452010 api_server.go:141] control plane version: v1.31.0
	I0819 19:08:49.108922  452010 api_server.go:131] duration metric: took 7.558305ms to wait for apiserver health ...
	I0819 19:08:49.108931  452010 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:08:49.283386  452010 request.go:632] Waited for 174.348677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:08:49.283451  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:08:49.283456  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:49.283464  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:49.283469  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:49.290726  452010 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 19:08:49.297300  452010 system_pods.go:59] 24 kube-system pods found
	I0819 19:08:49.297339  452010 system_pods.go:61] "coredns-6f6b679f8f-nkths" [b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2] Running
	I0819 19:08:49.297346  452010 system_pods.go:61] "coredns-6f6b679f8f-wmp8k" [ca2ee9c4-992a-4251-a717-9843b7b41894] Running
	I0819 19:08:49.297351  452010 system_pods.go:61] "etcd-ha-163902" [fc3d22fd-d8b3-45ca-a6f6-9898285f85d4] Running
	I0819 19:08:49.297355  452010 system_pods.go:61] "etcd-ha-163902-m02" [f05c7c5e-de7b-4963-b897-204a95440e0d] Running
	I0819 19:08:49.297360  452010 system_pods.go:61] "etcd-ha-163902-m03" [596e35eb-102b-4a4f-8e3f-807b940a4bc6] Running
	I0819 19:08:49.297364  452010 system_pods.go:61] "kindnet-72q7r" [d376a785-a08b-4d53-bc5e-02425901c947] Running
	I0819 19:08:49.297369  452010 system_pods.go:61] "kindnet-97cnn" [75284cfb-20b5-4675-b53e-db4130cc6722] Running
	I0819 19:08:49.297373  452010 system_pods.go:61] "kindnet-bpwjl" [624275c2-a670-4cc0-a11c-70f3e1b78946] Running
	I0819 19:08:49.297378  452010 system_pods.go:61] "kube-apiserver-ha-163902" [ab456c18-e3cf-462b-8028-55ddf9b36306] Running
	I0819 19:08:49.297383  452010 system_pods.go:61] "kube-apiserver-ha-163902-m02" [f11ec84f-77aa-4003-a786-f6c54b4c14bd] Running
	I0819 19:08:49.297387  452010 system_pods.go:61] "kube-apiserver-ha-163902-m03" [977eaba2-9cd2-42e2-83a4-f973bdebbf2b] Running
	I0819 19:08:49.297392  452010 system_pods.go:61] "kube-controller-manager-ha-163902" [efeafbc7-7719-4632-9215-fd3c3ca09cc5] Running
	I0819 19:08:49.297397  452010 system_pods.go:61] "kube-controller-manager-ha-163902-m02" [2d80759a-8137-4e3c-a621-ba92532c9d9b] Running
	I0819 19:08:49.297405  452010 system_pods.go:61] "kube-controller-manager-ha-163902-m03" [470c09f7-df81-4a14-9cbf-71b73a570c48] Running
	I0819 19:08:49.297410  452010 system_pods.go:61] "kube-proxy-4whvs" [b241f2e5-d83c-432b-bdce-c26940efa096] Running
	I0819 19:08:49.297417  452010 system_pods.go:61] "kube-proxy-wxrsv" [3d78c5e8-eed2-4da5-9425-76f96e2d8ed6] Running
	I0819 19:08:49.297424  452010 system_pods.go:61] "kube-proxy-xq852" [f9dee0f1-ada2-4cb4-8734-c2a3456c6d37] Running
	I0819 19:08:49.297431  452010 system_pods.go:61] "kube-scheduler-ha-163902" [ee9d8a96-56d9-4bc2-9829-bdf4a0ec0749] Running
	I0819 19:08:49.297437  452010 system_pods.go:61] "kube-scheduler-ha-163902-m02" [90a559e2-5c2a-4db8-b9d4-21362177d686] Running
	I0819 19:08:49.297443  452010 system_pods.go:61] "kube-scheduler-ha-163902-m03" [dc50d60c-4da1-4279-bd7a-bf1d9486d7ad] Running
	I0819 19:08:49.297449  452010 system_pods.go:61] "kube-vip-ha-163902" [3143571f-0eb1-4f9a-ada5-27fda4724d18] Running
	I0819 19:08:49.297455  452010 system_pods.go:61] "kube-vip-ha-163902-m02" [ccd47f0d-1f18-4146-808f-5436d25c859f] Running
	I0819 19:08:49.297460  452010 system_pods.go:61] "kube-vip-ha-163902-m03" [6f2b8b81-6d0d-4baa-9818-890c639a811c] Running
	I0819 19:08:49.297466  452010 system_pods.go:61] "storage-provisioner" [05dffa5a-3372-4a79-94ad-33d14a4b7fd0] Running
	I0819 19:08:49.297481  452010 system_pods.go:74] duration metric: took 188.54165ms to wait for pod list to return data ...
	I0819 19:08:49.297494  452010 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:08:49.482952  452010 request.go:632] Waited for 185.365607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0819 19:08:49.483026  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0819 19:08:49.483031  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:49.483039  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:49.483045  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:49.486489  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:49.486648  452010 default_sa.go:45] found service account: "default"
	I0819 19:08:49.486674  452010 default_sa.go:55] duration metric: took 189.169834ms for default service account to be created ...
	I0819 19:08:49.486687  452010 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:08:49.683399  452010 request.go:632] Waited for 196.582808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:08:49.683547  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:08:49.683560  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:49.683570  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:49.683577  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:49.689251  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:49.695863  452010 system_pods.go:86] 24 kube-system pods found
	I0819 19:08:49.695904  452010 system_pods.go:89] "coredns-6f6b679f8f-nkths" [b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2] Running
	I0819 19:08:49.695913  452010 system_pods.go:89] "coredns-6f6b679f8f-wmp8k" [ca2ee9c4-992a-4251-a717-9843b7b41894] Running
	I0819 19:08:49.695920  452010 system_pods.go:89] "etcd-ha-163902" [fc3d22fd-d8b3-45ca-a6f6-9898285f85d4] Running
	I0819 19:08:49.695926  452010 system_pods.go:89] "etcd-ha-163902-m02" [f05c7c5e-de7b-4963-b897-204a95440e0d] Running
	I0819 19:08:49.695931  452010 system_pods.go:89] "etcd-ha-163902-m03" [596e35eb-102b-4a4f-8e3f-807b940a4bc6] Running
	I0819 19:08:49.695935  452010 system_pods.go:89] "kindnet-72q7r" [d376a785-a08b-4d53-bc5e-02425901c947] Running
	I0819 19:08:49.695940  452010 system_pods.go:89] "kindnet-97cnn" [75284cfb-20b5-4675-b53e-db4130cc6722] Running
	I0819 19:08:49.695946  452010 system_pods.go:89] "kindnet-bpwjl" [624275c2-a670-4cc0-a11c-70f3e1b78946] Running
	I0819 19:08:49.695951  452010 system_pods.go:89] "kube-apiserver-ha-163902" [ab456c18-e3cf-462b-8028-55ddf9b36306] Running
	I0819 19:08:49.695957  452010 system_pods.go:89] "kube-apiserver-ha-163902-m02" [f11ec84f-77aa-4003-a786-f6c54b4c14bd] Running
	I0819 19:08:49.695962  452010 system_pods.go:89] "kube-apiserver-ha-163902-m03" [977eaba2-9cd2-42e2-83a4-f973bdebbf2b] Running
	I0819 19:08:49.695968  452010 system_pods.go:89] "kube-controller-manager-ha-163902" [efeafbc7-7719-4632-9215-fd3c3ca09cc5] Running
	I0819 19:08:49.695976  452010 system_pods.go:89] "kube-controller-manager-ha-163902-m02" [2d80759a-8137-4e3c-a621-ba92532c9d9b] Running
	I0819 19:08:49.695982  452010 system_pods.go:89] "kube-controller-manager-ha-163902-m03" [470c09f7-df81-4a14-9cbf-71b73a570c48] Running
	I0819 19:08:49.695988  452010 system_pods.go:89] "kube-proxy-4whvs" [b241f2e5-d83c-432b-bdce-c26940efa096] Running
	I0819 19:08:49.695993  452010 system_pods.go:89] "kube-proxy-wxrsv" [3d78c5e8-eed2-4da5-9425-76f96e2d8ed6] Running
	I0819 19:08:49.695999  452010 system_pods.go:89] "kube-proxy-xq852" [f9dee0f1-ada2-4cb4-8734-c2a3456c6d37] Running
	I0819 19:08:49.696004  452010 system_pods.go:89] "kube-scheduler-ha-163902" [ee9d8a96-56d9-4bc2-9829-bdf4a0ec0749] Running
	I0819 19:08:49.696012  452010 system_pods.go:89] "kube-scheduler-ha-163902-m02" [90a559e2-5c2a-4db8-b9d4-21362177d686] Running
	I0819 19:08:49.696018  452010 system_pods.go:89] "kube-scheduler-ha-163902-m03" [dc50d60c-4da1-4279-bd7a-bf1d9486d7ad] Running
	I0819 19:08:49.696026  452010 system_pods.go:89] "kube-vip-ha-163902" [3143571f-0eb1-4f9a-ada5-27fda4724d18] Running
	I0819 19:08:49.696033  452010 system_pods.go:89] "kube-vip-ha-163902-m02" [ccd47f0d-1f18-4146-808f-5436d25c859f] Running
	I0819 19:08:49.696037  452010 system_pods.go:89] "kube-vip-ha-163902-m03" [6f2b8b81-6d0d-4baa-9818-890c639a811c] Running
	I0819 19:08:49.696042  452010 system_pods.go:89] "storage-provisioner" [05dffa5a-3372-4a79-94ad-33d14a4b7fd0] Running
	I0819 19:08:49.696056  452010 system_pods.go:126] duration metric: took 209.3598ms to wait for k8s-apps to be running ...
	I0819 19:08:49.696068  452010 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:08:49.696130  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:08:49.712004  452010 system_svc.go:56] duration metric: took 15.928391ms WaitForService to wait for kubelet
	I0819 19:08:49.712039  452010 kubeadm.go:582] duration metric: took 23.654153488s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:08:49.712066  452010 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:08:49.882471  452010 request.go:632] Waited for 170.308096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes
	I0819 19:08:49.882533  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes
	I0819 19:08:49.882538  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:49.882546  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:49.882551  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:49.886320  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:49.887270  452010 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:08:49.887291  452010 node_conditions.go:123] node cpu capacity is 2
	I0819 19:08:49.887316  452010 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:08:49.887321  452010 node_conditions.go:123] node cpu capacity is 2
	I0819 19:08:49.887326  452010 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:08:49.887330  452010 node_conditions.go:123] node cpu capacity is 2
	I0819 19:08:49.887337  452010 node_conditions.go:105] duration metric: took 175.264878ms to run NodePressure ...
	I0819 19:08:49.887355  452010 start.go:241] waiting for startup goroutines ...
	I0819 19:08:49.887386  452010 start.go:255] writing updated cluster config ...
	I0819 19:08:49.887710  452010 ssh_runner.go:195] Run: rm -f paused
	I0819 19:08:49.942523  452010 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:08:49.944601  452010 out.go:177] * Done! kubectl is now configured to use "ha-163902" cluster and "default" namespace by default
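
For reference, the apiserver health checks recorded above amount to plain HTTPS GETs against the control-plane endpoint (/healthz returning "ok", then /version). A minimal Go sketch of that probe follows; the address is taken from the log, while skipping TLS verification is an assumption made purely for illustration, since minikube's own round-trippers use the cluster's client credentials:

// healthz_probe.go - hedged sketch of the /healthz and /version checks seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Assumption for this sketch only: no client certs at hand, so skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.227:8443" + path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// Print status and body; with real cluster credentials /version returns the
		// control plane version (v1.31.0 in the run above).
		fmt.Println(path, resp.StatusCode, string(body))
	}
}
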
	
	
	==> CRI-O <==
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.166860371Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094747166836829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9c78c39-429f-4a5c-b535-3580570d44a4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.167420155Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a74fa2bf-af62-4878-9b7f-b2bc51a1b1c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.167532385Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a74fa2bf-af62-4878-9b7f-b2bc51a1b1c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.167781595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724094532962212349,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259a75894a0e7bb2cdc9cd2b3363c853666a4c083456ff5d18e290b19f68e62d,PodSandboxId:bdf5c98989b4e84de8bdcad2859c43a2b3b7ceae07202ccadbf4b38936e48ad9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094393408241465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393388137670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393363252432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-99
2a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724094381626725267,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409437
8012677148,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f34db6fe664bceaf0e9d708d1611192c9077289166c5e6eb41e36c67d759f40,PodSandboxId:4fda63b31ef3bc66be8d33599e8028219f87aa47cd5edfb426a55be8aa6ac82c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409436992
7290186,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3706fbd25af952ee911ba6f754e5bd73,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094367193456047,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a9dbc3e9af7e2076b8b04c548b2a1cb5c385838357a95e7bc2a86709324ae5,PodSandboxId:d699d79418f1a46c4a408f0cc05417b89be740947f3265fba2de1f95ca70c025,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094367148223106,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fca5e9aea9309a38211d91fdbe50e67041a24d8dc547a7cff8edefeb4c57ae6,PodSandboxId:0b872309e95e526a6a3816ecc3576206a83295af6490713eb8431995e4de4605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094367120557759,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094367104625636,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a74fa2bf-af62-4878-9b7f-b2bc51a1b1c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.210220404Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26cfb97b-ad41-49ce-b491-3e0777904f46 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.210315549Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26cfb97b-ad41-49ce-b491-3e0777904f46 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.211538485Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc94c3b3-ea47-44c9-a364-81cc1022e5c7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.212004269Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094747211979059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc94c3b3-ea47-44c9-a364-81cc1022e5c7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.212917289Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6996e5e7-085a-4bd2-8c33-7727f634b5f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.212988760Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6996e5e7-085a-4bd2-8c33-7727f634b5f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.213283914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724094532962212349,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259a75894a0e7bb2cdc9cd2b3363c853666a4c083456ff5d18e290b19f68e62d,PodSandboxId:bdf5c98989b4e84de8bdcad2859c43a2b3b7ceae07202ccadbf4b38936e48ad9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094393408241465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393388137670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393363252432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-99
2a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724094381626725267,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409437
8012677148,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f34db6fe664bceaf0e9d708d1611192c9077289166c5e6eb41e36c67d759f40,PodSandboxId:4fda63b31ef3bc66be8d33599e8028219f87aa47cd5edfb426a55be8aa6ac82c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409436992
7290186,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3706fbd25af952ee911ba6f754e5bd73,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094367193456047,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a9dbc3e9af7e2076b8b04c548b2a1cb5c385838357a95e7bc2a86709324ae5,PodSandboxId:d699d79418f1a46c4a408f0cc05417b89be740947f3265fba2de1f95ca70c025,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094367148223106,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fca5e9aea9309a38211d91fdbe50e67041a24d8dc547a7cff8edefeb4c57ae6,PodSandboxId:0b872309e95e526a6a3816ecc3576206a83295af6490713eb8431995e4de4605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094367120557759,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094367104625636,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6996e5e7-085a-4bd2-8c33-7727f634b5f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.252245888Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf3e8096-56ae-4d09-9d74-ef51eacdfaac name=/runtime.v1.RuntimeService/Version
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.252340093Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf3e8096-56ae-4d09-9d74-ef51eacdfaac name=/runtime.v1.RuntimeService/Version
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.253309905Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=940517b9-5f37-4796-8ffd-2c4c73e13042 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.253747819Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094747253727621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=940517b9-5f37-4796-8ffd-2c4c73e13042 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.254259729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b79d80e3-bcfe-4286-8d89-6a2d1a0dd563 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.254332364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b79d80e3-bcfe-4286-8d89-6a2d1a0dd563 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.254576525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724094532962212349,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259a75894a0e7bb2cdc9cd2b3363c853666a4c083456ff5d18e290b19f68e62d,PodSandboxId:bdf5c98989b4e84de8bdcad2859c43a2b3b7ceae07202ccadbf4b38936e48ad9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094393408241465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393388137670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393363252432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-99
2a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724094381626725267,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409437
8012677148,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f34db6fe664bceaf0e9d708d1611192c9077289166c5e6eb41e36c67d759f40,PodSandboxId:4fda63b31ef3bc66be8d33599e8028219f87aa47cd5edfb426a55be8aa6ac82c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409436992
7290186,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3706fbd25af952ee911ba6f754e5bd73,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094367193456047,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a9dbc3e9af7e2076b8b04c548b2a1cb5c385838357a95e7bc2a86709324ae5,PodSandboxId:d699d79418f1a46c4a408f0cc05417b89be740947f3265fba2de1f95ca70c025,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094367148223106,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fca5e9aea9309a38211d91fdbe50e67041a24d8dc547a7cff8edefeb4c57ae6,PodSandboxId:0b872309e95e526a6a3816ecc3576206a83295af6490713eb8431995e4de4605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094367120557759,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094367104625636,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b79d80e3-bcfe-4286-8d89-6a2d1a0dd563 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.292849182Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63761edd-9678-4d40-b14f-443f57b38872 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.292942519Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63761edd-9678-4d40-b14f-443f57b38872 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.293945377Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31d787ff-39c3-4c20-b6c6-6e1c98667d47 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.294900169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094747294874001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31d787ff-39c3-4c20-b6c6-6e1c98667d47 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.295610893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6b4b2e8-bd51-4c78-a50e-49daa715a914 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.295826208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6b4b2e8-bd51-4c78-a50e-49daa715a914 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:12:27 ha-163902 crio[680]: time="2024-08-19 19:12:27.296116626Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724094532962212349,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:259a75894a0e7bb2cdc9cd2b3363c853666a4c083456ff5d18e290b19f68e62d,PodSandboxId:bdf5c98989b4e84de8bdcad2859c43a2b3b7ceae07202ccadbf4b38936e48ad9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094393408241465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393388137670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393363252432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724094381626725267,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094378012677148,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:4f34db6fe664bceaf0e9d708d1611192c9077289166c5e6eb41e36c67d759f40,PodSandboxId:4fda63b31ef3bc66be8d33599e8028219f87aa47cd5edfb426a55be8aa6ac82c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724094369927290186,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3706fbd25af952ee911ba6f754e5bd73,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094367193456047,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:63a9dbc3e9af7e2076b8b04c548b2a1cb5c385838357a95e7bc2a86709324ae5,PodSandboxId:d699d79418f1a46c4a408f0cc05417b89be740947f3265fba2de1f95ca70c025,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094367148223106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:8fca5e9aea9309a38211d91fdbe50e67041a24d8dc547a7cff8edefeb4c57ae6,PodSandboxId:0b872309e95e526a6a3816ecc3576206a83295af6490713eb8431995e4de4605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094367120557759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094367104625636,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6b4b2e8-bd51-4c78-a50e-49daa715a914 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	02444059f768b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   eb7a960ca621f       busybox-7dff88458-vlrsr
	259a75894a0e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   bdf5c98989b4e       storage-provisioner
	920809b3fb8b5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   ccb6b229e5b0f       coredns-6f6b679f8f-nkths
	e3292ee2a24df       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   17befe587bdb8       coredns-6f6b679f8f-wmp8k
	2bde6d659e1cd       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   10a016c587c22       kindnet-bpwjl
	db4dd64341a0f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   5f1f616898161       kube-proxy-wxrsv
	4f34db6fe664b       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   4fda63b31ef3b       kube-vip-ha-163902
	4b31ffd467824       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   644e4a4ea97f1       kube-scheduler-ha-163902
	63a9dbc3e9af7       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   d699d79418f1a       kube-controller-manager-ha-163902
	8fca5e9aea930       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   0b872309e95e5       kube-apiserver-ha-163902
	d7785bd28970f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   8f73fd805b78d       etcd-ha-163902
	
	
	==> coredns [920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5] <==
	[INFO] 10.244.2.2:39433 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000348752s
	[INFO] 10.244.2.2:47564 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018325s
	[INFO] 10.244.2.2:49967 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003381326s
	[INFO] 10.244.2.2:33626 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258809s
	[INFO] 10.244.1.2:51524 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001508794s
	[INFO] 10.244.1.2:44203 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105366s
	[INFO] 10.244.1.2:39145 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000196935s
	[INFO] 10.244.1.2:53804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000174817s
	[INFO] 10.244.0.4:38242 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152582s
	[INFO] 10.244.0.4:50866 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00178155s
	[INFO] 10.244.0.4:41459 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077648s
	[INFO] 10.244.0.4:52991 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001294022s
	[INFO] 10.244.0.4:49760 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077772s
	[INFO] 10.244.2.2:52036 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000184006s
	[INFO] 10.244.2.2:42639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139597s
	[INFO] 10.244.1.2:45707 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157857s
	[INFO] 10.244.1.2:55541 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079589s
	[INFO] 10.244.0.4:39107 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114365s
	[INFO] 10.244.0.4:42814 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075113s
	[INFO] 10.244.1.2:45907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164052s
	[INFO] 10.244.1.2:50977 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168617s
	[INFO] 10.244.1.2:55449 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000213337s
	[INFO] 10.244.1.2:36556 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110937s
	[INFO] 10.244.0.4:58486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000301321s
	[INFO] 10.244.0.4:59114 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075318s
	
	
	==> coredns [e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816] <==
	[INFO] 10.244.1.2:49834 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.00173877s
	[INFO] 10.244.1.2:53299 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001633058s
	[INFO] 10.244.0.4:37265 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248064s
	[INFO] 10.244.2.2:37997 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.018569099s
	[INFO] 10.244.2.2:39006 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148509s
	[INFO] 10.244.2.2:49793 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131124s
	[INFO] 10.244.1.2:35247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129697s
	[INFO] 10.244.1.2:51995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004964244s
	[INFO] 10.244.1.2:49029 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139842s
	[INFO] 10.244.1.2:37017 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012537s
	[INFO] 10.244.0.4:60699 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057628s
	[INFO] 10.244.0.4:57923 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000112473s
	[INFO] 10.244.0.4:51503 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082503s
	[INFO] 10.244.2.2:34426 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121035s
	[INFO] 10.244.2.2:59490 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095139s
	[INFO] 10.244.1.2:50323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124167s
	[INFO] 10.244.1.2:46467 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102348s
	[INFO] 10.244.0.4:41765 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001163s
	[INFO] 10.244.0.4:34540 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057148s
	[INFO] 10.244.2.2:54418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140774s
	[INFO] 10.244.2.2:59184 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158893s
	[INFO] 10.244.2.2:53883 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149814s
	[INFO] 10.244.2.2:35674 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136715s
	[INFO] 10.244.0.4:42875 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138512s
	[INFO] 10.244.0.4:58237 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102142s
	
	
	==> describe nodes <==
	Name:               ha-163902
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_06_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:06:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:12:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:09:17 +0000   Mon, 19 Aug 2024 19:06:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:09:17 +0000   Mon, 19 Aug 2024 19:06:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:09:17 +0000   Mon, 19 Aug 2024 19:06:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:09:17 +0000   Mon, 19 Aug 2024 19:06:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-163902
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d3b52f7c3a144ec8d3a6e98276775f3
	  System UUID:                4d3b52f7-c3a1-44ec-8d3a-6e98276775f3
	  Boot ID:                    26bff1c8-7a07-4ad4-9634-fcbc547b5a26
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vlrsr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 coredns-6f6b679f8f-nkths             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 coredns-6f6b679f8f-wmp8k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 etcd-ha-163902                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m16s
	  kube-system                 kindnet-bpwjl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-163902             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-ha-163902    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-proxy-wxrsv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-163902             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-163902                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m9s   kube-proxy       
	  Normal  Starting                 6m14s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m14s  kubelet          Node ha-163902 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s  kubelet          Node ha-163902 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s  kubelet          Node ha-163902 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m11s  node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	  Normal  NodeReady                5m55s  kubelet          Node ha-163902 status is now: NodeReady
	  Normal  RegisteredNode           5m9s   node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	  Normal  RegisteredNode           3m56s  node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	
	
	Name:               ha-163902-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T19_07_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:07:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:10:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 19:09:13 +0000   Mon, 19 Aug 2024 19:10:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 19:09:13 +0000   Mon, 19 Aug 2024 19:10:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 19:09:13 +0000   Mon, 19 Aug 2024 19:10:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 19:09:13 +0000   Mon, 19 Aug 2024 19:10:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    ha-163902-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4ebc4d6f40f47d9854129310dcf34d7
	  System UUID:                d4ebc4d6-f40f-47d9-8541-29310dcf34d7
	  Boot ID:                    716d5440-ffff-4957-be6f-50e03e7b2422
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9zj57                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-163902-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m15s
	  kube-system                 kindnet-97cnn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m17s
	  kube-system                 kube-apiserver-ha-163902-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-ha-163902-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-proxy-4whvs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-scheduler-ha-163902-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-vip-ha-163902-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m17s (x8 over 5m17s)  kubelet          Node ha-163902-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m17s (x8 over 5m17s)  kubelet          Node ha-163902-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m17s (x7 over 5m17s)  kubelet          Node ha-163902-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  NodeNotReady             101s                   node-controller  Node ha-163902-m02 status is now: NodeNotReady
	
	
	Name:               ha-163902-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T19_08_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:08:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:12:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:09:23 +0000   Mon, 19 Aug 2024 19:08:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:09:23 +0000   Mon, 19 Aug 2024 19:08:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:09:23 +0000   Mon, 19 Aug 2024 19:08:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:09:23 +0000   Mon, 19 Aug 2024 19:08:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.59
	  Hostname:    ha-163902-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4a4497fbf7a43159de7a77620b40e05
	  System UUID:                c4a4497f-bf7a-4315-9de7-a77620b40e05
	  Boot ID:                    9c2df4b5-f5ca-406f-a65a-a8ee6263b172
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4hqxq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-163902-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m3s
	  kube-system                 kindnet-72q7r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m5s
	  kube-system                 kube-apiserver-ha-163902-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-ha-163902-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-proxy-xq852                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-ha-163902-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-vip-ha-163902-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m5s (x8 over 4m5s)  kubelet          Node ha-163902-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x8 over 4m5s)  kubelet          Node ha-163902-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x7 over 4m5s)  kubelet          Node ha-163902-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-163902-m03 event: Registered Node ha-163902-m03 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-163902-m03 event: Registered Node ha-163902-m03 in Controller
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-163902-m03 event: Registered Node ha-163902-m03 in Controller
	
	
	Name:               ha-163902-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T19_09_27_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:09:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:12:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:09:57 +0000   Mon, 19 Aug 2024 19:09:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:09:57 +0000   Mon, 19 Aug 2024 19:09:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:09:57 +0000   Mon, 19 Aug 2024 19:09:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:09:57 +0000   Mon, 19 Aug 2024 19:09:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.130
	  Hostname:    ha-163902-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d771c9152e0748dca0ecbcee5197aaea
	  System UUID:                d771c915-2e07-48dc-a0ec-bcee5197aaea
	  Boot ID:                    2128ff13-241c-4434-b6fe-09c16a15357c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-plbmk       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-9b77p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-163902-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-163902-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-163902-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal  RegisteredNode           2m56s                node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-163902-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug19 19:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047802] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.035962] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.765720] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.945917] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.569397] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.607321] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.061246] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063525] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.198071] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.117006] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.272735] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Aug19 19:06] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +3.660968] systemd-fstab-generator[894]: Ignoring "noauto" option for root device
	[  +0.062148] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.174652] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.082985] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.372328] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.696903] kauditd_printk_skb: 23 callbacks suppressed
	[Aug19 19:07] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732] <==
	{"level":"warn","ts":"2024-08-19T19:12:27.181482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.281516Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.544960Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.553324Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.558491Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.580955Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.581740Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.592059Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.598520Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.602029Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.605387Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.610841Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.616480Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.624249Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.627925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.631518Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.639509Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.646700Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.652578Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.656291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.659726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.663988Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.669910Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.677327Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:12:27.681714Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:12:27 up 6 min,  0 users,  load average: 0.09, 0.16, 0.09
	Linux ha-163902 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2] <==
	I0819 19:11:52.633941       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:12:02.632237       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:12:02.632284       1 main.go:299] handling current node
	I0819 19:12:02.632302       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:12:02.632309       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:12:02.632438       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:12:02.632459       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:12:02.632513       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:12:02.632519       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:12:12.633821       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:12:12.633938       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:12:12.634112       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:12:12.634195       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:12:12.634301       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:12:12.634327       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:12:12.634387       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:12:12.634406       1 main.go:299] handling current node
	I0819 19:12:22.624602       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:12:22.624708       1 main.go:299] handling current node
	I0819 19:12:22.624736       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:12:22.624759       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:12:22.624919       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:12:22.624942       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:12:22.625005       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:12:22.625023       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8fca5e9aea9309a38211d91fdbe50e67041a24d8dc547a7cff8edefeb4c57ae6] <==
	I0819 19:06:11.722524       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 19:06:11.729537       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.227]
	I0819 19:06:11.731100       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 19:06:11.740780       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 19:06:11.907361       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 19:06:13.440920       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 19:06:13.469083       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 19:06:13.484596       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 19:06:17.359540       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0819 19:06:17.507673       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0819 19:08:54.576415       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46604: use of closed network connection
	E0819 19:08:54.772351       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46608: use of closed network connection
	E0819 19:08:54.961822       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46630: use of closed network connection
	E0819 19:08:55.182737       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46642: use of closed network connection
	E0819 19:08:55.374304       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46666: use of closed network connection
	E0819 19:08:55.562243       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46684: use of closed network connection
	E0819 19:08:55.753509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46698: use of closed network connection
	E0819 19:08:55.937825       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46722: use of closed network connection
	E0819 19:08:56.128562       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46734: use of closed network connection
	E0819 19:08:56.430909       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46756: use of closed network connection
	E0819 19:08:56.599912       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46778: use of closed network connection
	E0819 19:08:56.783300       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46798: use of closed network connection
	E0819 19:08:56.967242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46818: use of closed network connection
	E0819 19:08:57.166319       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46834: use of closed network connection
	E0819 19:08:57.342330       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46850: use of closed network connection
	
	
	==> kube-controller-manager [63a9dbc3e9af7e2076b8b04c548b2a1cb5c385838357a95e7bc2a86709324ae5] <==
	I0819 19:09:26.888205       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-163902-m04" podCIDRs=["10.244.3.0/24"]
	I0819 19:09:26.888257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:26.888290       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:26.897777       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:26.955550       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-163902-m04"
	I0819 19:09:27.071653       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:27.261410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:27.495123       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:28.729442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:28.786826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:31.078719       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:31.104740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:37.267662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:46.368127       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:46.368769       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-163902-m04"
	I0819 19:09:46.380772       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:46.972463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:57.505219       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:10:46.108790       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-163902-m04"
	I0819 19:10:46.108888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m02"
	I0819 19:10:46.128703       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m02"
	I0819 19:10:46.204463       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.13391ms"
	I0819 19:10:46.205139       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.506µs"
	I0819 19:10:47.082631       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m02"
	I0819 19:10:51.319552       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m02"
	
	
	==> kube-proxy [db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:06:18.416775       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:06:18.428570       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0819 19:06:18.429535       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:06:18.540472       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:06:18.540525       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:06:18.540550       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:06:18.546347       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:06:18.546579       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:06:18.546589       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:06:18.547809       1 config.go:197] "Starting service config controller"
	I0819 19:06:18.547832       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:06:18.547851       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:06:18.547854       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:06:18.549843       1 config.go:326] "Starting node config controller"
	I0819 19:06:18.549853       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:06:18.648166       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 19:06:18.648223       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:06:18.650074       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca] <==
	W0819 19:06:10.791025       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 19:06:10.791133       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 19:06:10.813346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 19:06:10.813395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:10.852062       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 19:06:10.852116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:10.897777       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 19:06:10.897836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:10.947072       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 19:06:10.947211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:10.959849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 19:06:10.960702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:11.033567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 19:06:11.033616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:11.138095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 19:06:11.138245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:11.154082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 19:06:11.154232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:11.189865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 19:06:11.189919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:11.215289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 19:06:11.215345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0819 19:06:12.630110       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 19:08:50.829572       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9zj57\": pod busybox-7dff88458-9zj57 is already assigned to node \"ha-163902-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-9zj57" node="ha-163902-m03"
	E0819 19:08:50.829705       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9zj57\": pod busybox-7dff88458-9zj57 is already assigned to node \"ha-163902-m02\"" pod="default/busybox-7dff88458-9zj57"
	
	
	==> kubelet <==
	Aug 19 19:11:13 ha-163902 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:11:13 ha-163902 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:11:13 ha-163902 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:11:13 ha-163902 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:11:13 ha-163902 kubelet[1316]: E0819 19:11:13.512827    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094673512559174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:11:13 ha-163902 kubelet[1316]: E0819 19:11:13.512857    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094673512559174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:11:23 ha-163902 kubelet[1316]: E0819 19:11:23.515685    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094683514478125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:11:23 ha-163902 kubelet[1316]: E0819 19:11:23.516062    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094683514478125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:11:33 ha-163902 kubelet[1316]: E0819 19:11:33.518455    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094693518128459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:11:33 ha-163902 kubelet[1316]: E0819 19:11:33.518478    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094693518128459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:11:43 ha-163902 kubelet[1316]: E0819 19:11:43.520014    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094703519646074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:11:43 ha-163902 kubelet[1316]: E0819 19:11:43.520132    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094703519646074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:11:53 ha-163902 kubelet[1316]: E0819 19:11:53.522009    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094713521610121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:11:53 ha-163902 kubelet[1316]: E0819 19:11:53.522440    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094713521610121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:03 ha-163902 kubelet[1316]: E0819 19:12:03.525323    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094723524793133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:03 ha-163902 kubelet[1316]: E0819 19:12:03.525677    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094723524793133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:13 ha-163902 kubelet[1316]: E0819 19:12:13.380631    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:12:13 ha-163902 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:12:13 ha-163902 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:12:13 ha-163902 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:12:13 ha-163902 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:12:13 ha-163902 kubelet[1316]: E0819 19:12:13.529405    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094733528892278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:13 ha-163902 kubelet[1316]: E0819 19:12:13.529467    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094733528892278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:23 ha-163902 kubelet[1316]: E0819 19:12:23.531547    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094743531190273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:23 ha-163902 kubelet[1316]: E0819 19:12:23.531996    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094743531190273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-163902 -n ha-163902
helpers_test.go:261: (dbg) Run:  kubectl --context ha-163902 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.94s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (57.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr: exit status 3 (3.195982986s)

                                                
                                                
-- stdout --
	ha-163902
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-163902-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:12:32.312289  456824 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:12:32.312587  456824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:12:32.312599  456824 out.go:358] Setting ErrFile to fd 2...
	I0819 19:12:32.312604  456824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:12:32.312887  456824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:12:32.313159  456824 out.go:352] Setting JSON to false
	I0819 19:12:32.313198  456824 mustload.go:65] Loading cluster: ha-163902
	I0819 19:12:32.313327  456824 notify.go:220] Checking for updates...
	I0819 19:12:32.313691  456824 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:32.313711  456824 status.go:255] checking status of ha-163902 ...
	I0819 19:12:32.314107  456824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:32.314168  456824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:32.334774  456824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0819 19:12:32.335482  456824 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:32.336097  456824 main.go:141] libmachine: Using API Version  1
	I0819 19:12:32.336120  456824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:32.336596  456824 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:32.336804  456824 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:12:32.338613  456824 status.go:330] ha-163902 host status = "Running" (err=<nil>)
	I0819 19:12:32.338637  456824 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:12:32.338964  456824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:32.339003  456824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:32.354929  456824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34005
	I0819 19:12:32.355385  456824 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:32.355871  456824 main.go:141] libmachine: Using API Version  1
	I0819 19:12:32.355893  456824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:32.356374  456824 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:32.356576  456824 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:12:32.359851  456824 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:32.360291  456824 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:12:32.360314  456824 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:32.360491  456824 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:12:32.360817  456824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:32.360863  456824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:32.377345  456824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45417
	I0819 19:12:32.377987  456824 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:32.378513  456824 main.go:141] libmachine: Using API Version  1
	I0819 19:12:32.378538  456824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:32.378856  456824 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:32.379066  456824 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:12:32.379260  456824 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:32.379304  456824 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:12:32.382142  456824 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:32.382543  456824 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:12:32.382575  456824 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:32.382790  456824 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:12:32.382990  456824 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:12:32.383152  456824 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:12:32.383399  456824 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:12:32.464733  456824 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:32.471455  456824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:32.488050  456824 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:12:32.488092  456824 api_server.go:166] Checking apiserver status ...
	I0819 19:12:32.488134  456824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:32.502598  456824 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup
	W0819 19:12:32.513257  456824 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:32.513324  456824 ssh_runner.go:195] Run: ls
	I0819 19:12:32.518620  456824 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:12:32.523186  456824 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:12:32.523239  456824 status.go:422] ha-163902 apiserver status = Running (err=<nil>)
	I0819 19:12:32.523257  456824 status.go:257] ha-163902 status: &{Name:ha-163902 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:12:32.523283  456824 status.go:255] checking status of ha-163902-m02 ...
	I0819 19:12:32.523693  456824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:32.523722  456824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:32.541782  456824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32855
	I0819 19:12:32.542315  456824 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:32.542881  456824 main.go:141] libmachine: Using API Version  1
	I0819 19:12:32.542910  456824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:32.543283  456824 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:32.543438  456824 main.go:141] libmachine: (ha-163902-m02) Calling .GetState
	I0819 19:12:32.545294  456824 status.go:330] ha-163902-m02 host status = "Running" (err=<nil>)
	I0819 19:12:32.545314  456824 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:12:32.545611  456824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:32.545636  456824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:32.560623  456824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35997
	I0819 19:12:32.561257  456824 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:32.561772  456824 main.go:141] libmachine: Using API Version  1
	I0819 19:12:32.561791  456824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:32.562165  456824 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:32.562363  456824 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:12:32.565577  456824 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:32.566189  456824 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:12:32.566224  456824 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:32.566416  456824 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:12:32.566756  456824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:32.566795  456824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:32.582255  456824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45353
	I0819 19:12:32.582775  456824 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:32.583346  456824 main.go:141] libmachine: Using API Version  1
	I0819 19:12:32.583374  456824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:32.583736  456824 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:32.583957  456824 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:12:32.584263  456824 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:32.584290  456824 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:12:32.587378  456824 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:32.587849  456824 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:12:32.587880  456824 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:32.588035  456824 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:12:32.588219  456824 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:12:32.588357  456824 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:12:32.588457  456824 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	W0819 19:12:35.109483  456824 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.162:22: connect: no route to host
	W0819 19:12:35.109595  456824 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E0819 19:12:35.109618  456824 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:12:35.109629  456824 status.go:257] ha-163902-m02 status: &{Name:ha-163902-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 19:12:35.109662  456824 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:12:35.109674  456824 status.go:255] checking status of ha-163902-m03 ...
	I0819 19:12:35.109995  456824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:35.110051  456824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:35.126658  456824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43539
	I0819 19:12:35.127164  456824 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:35.127653  456824 main.go:141] libmachine: Using API Version  1
	I0819 19:12:35.127742  456824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:35.128054  456824 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:35.128262  456824 main.go:141] libmachine: (ha-163902-m03) Calling .GetState
	I0819 19:12:35.129944  456824 status.go:330] ha-163902-m03 host status = "Running" (err=<nil>)
	I0819 19:12:35.129963  456824 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:12:35.130290  456824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:35.130334  456824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:35.146305  456824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0819 19:12:35.146834  456824 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:35.147334  456824 main.go:141] libmachine: Using API Version  1
	I0819 19:12:35.147359  456824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:35.147739  456824 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:35.147949  456824 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:12:35.151086  456824 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:35.151504  456824 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:12:35.151534  456824 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:35.151686  456824 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:12:35.152028  456824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:35.152074  456824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:35.167843  456824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0819 19:12:35.168263  456824 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:35.168804  456824 main.go:141] libmachine: Using API Version  1
	I0819 19:12:35.168823  456824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:35.169099  456824 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:35.169318  456824 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:12:35.169502  456824 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:35.169536  456824 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:12:35.172497  456824 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:35.172964  456824 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:12:35.172993  456824 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:35.173152  456824 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:12:35.173376  456824 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:12:35.173528  456824 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:12:35.173664  456824 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:12:35.252426  456824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:35.267522  456824 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:12:35.267557  456824 api_server.go:166] Checking apiserver status ...
	I0819 19:12:35.267591  456824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:35.281494  456824 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0819 19:12:35.291518  456824 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:35.291592  456824 ssh_runner.go:195] Run: ls
	I0819 19:12:35.296304  456824 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:12:35.302077  456824 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:12:35.302111  456824 status.go:422] ha-163902-m03 apiserver status = Running (err=<nil>)
	I0819 19:12:35.302124  456824 status.go:257] ha-163902-m03 status: &{Name:ha-163902-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:12:35.302148  456824 status.go:255] checking status of ha-163902-m04 ...
	I0819 19:12:35.302462  456824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:35.302494  456824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:35.319280  456824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0819 19:12:35.319747  456824 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:35.320182  456824 main.go:141] libmachine: Using API Version  1
	I0819 19:12:35.320211  456824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:35.320552  456824 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:35.320804  456824 main.go:141] libmachine: (ha-163902-m04) Calling .GetState
	I0819 19:12:35.322658  456824 status.go:330] ha-163902-m04 host status = "Running" (err=<nil>)
	I0819 19:12:35.322681  456824 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:12:35.323065  456824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:35.323097  456824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:35.338805  456824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38279
	I0819 19:12:35.339241  456824 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:35.339719  456824 main.go:141] libmachine: Using API Version  1
	I0819 19:12:35.339741  456824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:35.340066  456824 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:35.340263  456824 main.go:141] libmachine: (ha-163902-m04) Calling .GetIP
	I0819 19:12:35.343184  456824 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:35.343694  456824 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:12:35.343730  456824 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:35.343968  456824 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:12:35.344298  456824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:35.344337  456824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:35.362066  456824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
	I0819 19:12:35.362617  456824 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:35.363171  456824 main.go:141] libmachine: Using API Version  1
	I0819 19:12:35.363201  456824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:35.363498  456824 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:35.363691  456824 main.go:141] libmachine: (ha-163902-m04) Calling .DriverName
	I0819 19:12:35.363881  456824 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:35.363900  456824 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHHostname
	I0819 19:12:35.367044  456824 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:35.367485  456824 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:12:35.367510  456824 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:35.367657  456824 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHPort
	I0819 19:12:35.367840  456824 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHKeyPath
	I0819 19:12:35.367970  456824 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHUsername
	I0819 19:12:35.368134  456824 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m04/id_rsa Username:docker}
	I0819 19:12:35.448015  456824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:35.462000  456824 status.go:257] ha-163902-m04 status: &{Name:ha-163902-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr: exit status 3 (4.908160505s)

                                                
                                                
-- stdout --
	ha-163902
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-163902-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:12:36.747237  456924 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:12:36.747342  456924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:12:36.747348  456924 out.go:358] Setting ErrFile to fd 2...
	I0819 19:12:36.747354  456924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:12:36.747545  456924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:12:36.747755  456924 out.go:352] Setting JSON to false
	I0819 19:12:36.747785  456924 mustload.go:65] Loading cluster: ha-163902
	I0819 19:12:36.747889  456924 notify.go:220] Checking for updates...
	I0819 19:12:36.748253  456924 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:36.748271  456924 status.go:255] checking status of ha-163902 ...
	I0819 19:12:36.748685  456924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:36.748739  456924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:36.764545  456924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41617
	I0819 19:12:36.765015  456924 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:36.765640  456924 main.go:141] libmachine: Using API Version  1
	I0819 19:12:36.765668  456924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:36.766168  456924 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:36.766391  456924 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:12:36.768428  456924 status.go:330] ha-163902 host status = "Running" (err=<nil>)
	I0819 19:12:36.768456  456924 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:12:36.768788  456924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:36.768837  456924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:36.784528  456924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35897
	I0819 19:12:36.785018  456924 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:36.785577  456924 main.go:141] libmachine: Using API Version  1
	I0819 19:12:36.785612  456924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:36.785991  456924 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:36.786233  456924 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:12:36.789430  456924 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:36.789873  456924 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:12:36.789899  456924 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:36.790090  456924 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:12:36.790551  456924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:36.790608  456924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:36.807606  456924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45939
	I0819 19:12:36.808401  456924 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:36.808978  456924 main.go:141] libmachine: Using API Version  1
	I0819 19:12:36.809008  456924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:36.809382  456924 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:36.809690  456924 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:12:36.809904  456924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:36.809934  456924 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:12:36.813074  456924 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:36.813572  456924 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:12:36.813603  456924 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:36.813879  456924 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:12:36.814084  456924 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:12:36.814238  456924 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:12:36.814388  456924 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:12:36.892952  456924 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:36.899087  456924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:36.913884  456924 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:12:36.913923  456924 api_server.go:166] Checking apiserver status ...
	I0819 19:12:36.913958  456924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:36.929246  456924 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup
	W0819 19:12:36.942813  456924 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:36.942884  456924 ssh_runner.go:195] Run: ls
	I0819 19:12:36.947847  456924 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:12:36.954055  456924 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:12:36.954087  456924 status.go:422] ha-163902 apiserver status = Running (err=<nil>)
	I0819 19:12:36.954098  456924 status.go:257] ha-163902 status: &{Name:ha-163902 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:12:36.954118  456924 status.go:255] checking status of ha-163902-m02 ...
	I0819 19:12:36.954453  456924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:36.954486  456924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:36.970022  456924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43847
	I0819 19:12:36.970421  456924 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:36.970919  456924 main.go:141] libmachine: Using API Version  1
	I0819 19:12:36.970941  456924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:36.971266  456924 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:36.971502  456924 main.go:141] libmachine: (ha-163902-m02) Calling .GetState
	I0819 19:12:36.972847  456924 status.go:330] ha-163902-m02 host status = "Running" (err=<nil>)
	I0819 19:12:36.972868  456924 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:12:36.973169  456924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:36.973210  456924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:36.989279  456924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35473
	I0819 19:12:36.989740  456924 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:36.990239  456924 main.go:141] libmachine: Using API Version  1
	I0819 19:12:36.990261  456924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:36.990557  456924 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:36.990744  456924 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:12:36.993074  456924 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:36.993566  456924 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:12:36.993596  456924 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:36.993767  456924 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:12:36.994113  456924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:36.994156  456924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:37.009789  456924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38477
	I0819 19:12:37.010303  456924 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:37.010769  456924 main.go:141] libmachine: Using API Version  1
	I0819 19:12:37.010791  456924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:37.011110  456924 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:37.011295  456924 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:12:37.011475  456924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:37.011500  456924 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:12:37.014702  456924 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:37.015218  456924 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:12:37.015246  456924 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:37.015440  456924 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:12:37.015675  456924 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:12:37.015885  456924 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:12:37.016060  456924 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	W0819 19:12:38.181427  456924 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:12:38.181509  456924 retry.go:31] will retry after 169.066267ms: dial tcp 192.168.39.162:22: connect: no route to host
	W0819 19:12:41.253485  456924 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.162:22: connect: no route to host
	W0819 19:12:41.253596  456924 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E0819 19:12:41.253617  456924 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:12:41.253624  456924 status.go:257] ha-163902-m02 status: &{Name:ha-163902-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 19:12:41.253649  456924 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:12:41.253657  456924 status.go:255] checking status of ha-163902-m03 ...
	I0819 19:12:41.254012  456924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:41.254056  456924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:41.269513  456924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34509
	I0819 19:12:41.269993  456924 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:41.270551  456924 main.go:141] libmachine: Using API Version  1
	I0819 19:12:41.270581  456924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:41.270894  456924 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:41.271115  456924 main.go:141] libmachine: (ha-163902-m03) Calling .GetState
	I0819 19:12:41.272790  456924 status.go:330] ha-163902-m03 host status = "Running" (err=<nil>)
	I0819 19:12:41.272809  456924 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:12:41.273106  456924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:41.273164  456924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:41.288960  456924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42423
	I0819 19:12:41.289464  456924 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:41.289971  456924 main.go:141] libmachine: Using API Version  1
	I0819 19:12:41.289996  456924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:41.290336  456924 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:41.290539  456924 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:12:41.293287  456924 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:41.293644  456924 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:12:41.293662  456924 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:41.293871  456924 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:12:41.294265  456924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:41.294309  456924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:41.309689  456924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38745
	I0819 19:12:41.310272  456924 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:41.310757  456924 main.go:141] libmachine: Using API Version  1
	I0819 19:12:41.310779  456924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:41.311110  456924 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:41.311357  456924 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:12:41.311549  456924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:41.311581  456924 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:12:41.314433  456924 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:41.314834  456924 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:12:41.314866  456924 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:41.315085  456924 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:12:41.315302  456924 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:12:41.315460  456924 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:12:41.315607  456924 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:12:41.392481  456924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:41.408418  456924 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:12:41.408454  456924 api_server.go:166] Checking apiserver status ...
	I0819 19:12:41.408493  456924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:41.422478  456924 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0819 19:12:41.434077  456924 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:41.434142  456924 ssh_runner.go:195] Run: ls
	I0819 19:12:41.438858  456924 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:12:41.443308  456924 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:12:41.443341  456924 status.go:422] ha-163902-m03 apiserver status = Running (err=<nil>)
	I0819 19:12:41.443354  456924 status.go:257] ha-163902-m03 status: &{Name:ha-163902-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:12:41.443376  456924 status.go:255] checking status of ha-163902-m04 ...
	I0819 19:12:41.443696  456924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:41.443726  456924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:41.459379  456924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37905
	I0819 19:12:41.459856  456924 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:41.460347  456924 main.go:141] libmachine: Using API Version  1
	I0819 19:12:41.460369  456924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:41.460701  456924 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:41.460889  456924 main.go:141] libmachine: (ha-163902-m04) Calling .GetState
	I0819 19:12:41.462620  456924 status.go:330] ha-163902-m04 host status = "Running" (err=<nil>)
	I0819 19:12:41.462639  456924 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:12:41.462973  456924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:41.463020  456924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:41.480098  456924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0819 19:12:41.480660  456924 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:41.481200  456924 main.go:141] libmachine: Using API Version  1
	I0819 19:12:41.481230  456924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:41.481555  456924 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:41.481731  456924 main.go:141] libmachine: (ha-163902-m04) Calling .GetIP
	I0819 19:12:41.484406  456924 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:41.484874  456924 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:12:41.484931  456924 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:41.485078  456924 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:12:41.485544  456924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:41.485580  456924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:41.501605  456924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38639
	I0819 19:12:41.502059  456924 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:41.502519  456924 main.go:141] libmachine: Using API Version  1
	I0819 19:12:41.502539  456924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:41.502858  456924 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:41.503056  456924 main.go:141] libmachine: (ha-163902-m04) Calling .DriverName
	I0819 19:12:41.503234  456924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:41.503252  456924 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHHostname
	I0819 19:12:41.506155  456924 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:41.506600  456924 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:12:41.506622  456924 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:41.506823  456924 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHPort
	I0819 19:12:41.507040  456924 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHKeyPath
	I0819 19:12:41.507196  456924 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHUsername
	I0819 19:12:41.507341  456924 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m04/id_rsa Username:docker}
	I0819 19:12:41.592617  456924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:41.607097  456924 status.go:257] ha-163902-m04 status: &{Name:ha-163902-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr: exit status 3 (5.06304954s)

                                                
                                                
-- stdout --
	ha-163902
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-163902-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:12:42.735457  457032 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:12:42.735717  457032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:12:42.735726  457032 out.go:358] Setting ErrFile to fd 2...
	I0819 19:12:42.735731  457032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:12:42.735914  457032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:12:42.736081  457032 out.go:352] Setting JSON to false
	I0819 19:12:42.736107  457032 mustload.go:65] Loading cluster: ha-163902
	I0819 19:12:42.736169  457032 notify.go:220] Checking for updates...
	I0819 19:12:42.736485  457032 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:42.736500  457032 status.go:255] checking status of ha-163902 ...
	I0819 19:12:42.736897  457032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:42.736952  457032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:42.752867  457032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32973
	I0819 19:12:42.753451  457032 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:42.754224  457032 main.go:141] libmachine: Using API Version  1
	I0819 19:12:42.754250  457032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:42.754637  457032 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:42.754850  457032 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:12:42.756653  457032 status.go:330] ha-163902 host status = "Running" (err=<nil>)
	I0819 19:12:42.756675  457032 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:12:42.757102  457032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:42.757209  457032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:42.773218  457032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0819 19:12:42.773663  457032 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:42.774295  457032 main.go:141] libmachine: Using API Version  1
	I0819 19:12:42.774344  457032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:42.774729  457032 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:42.774962  457032 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:12:42.777904  457032 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:42.778376  457032 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:12:42.778413  457032 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:42.778564  457032 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:12:42.778883  457032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:42.778932  457032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:42.795638  457032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46035
	I0819 19:12:42.796071  457032 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:42.796543  457032 main.go:141] libmachine: Using API Version  1
	I0819 19:12:42.796566  457032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:42.797001  457032 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:42.797227  457032 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:12:42.797418  457032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:42.797438  457032 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:12:42.800383  457032 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:42.800847  457032 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:12:42.800879  457032 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:42.801077  457032 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:12:42.801274  457032 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:12:42.801420  457032 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:12:42.801588  457032 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:12:42.888730  457032 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:42.895199  457032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:42.910500  457032 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:12:42.910546  457032 api_server.go:166] Checking apiserver status ...
	I0819 19:12:42.910584  457032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:42.924563  457032 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup
	W0819 19:12:42.934160  457032 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:42.934226  457032 ssh_runner.go:195] Run: ls
	I0819 19:12:42.938769  457032 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:12:42.944788  457032 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:12:42.944822  457032 status.go:422] ha-163902 apiserver status = Running (err=<nil>)
	I0819 19:12:42.944838  457032 status.go:257] ha-163902 status: &{Name:ha-163902 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:12:42.944861  457032 status.go:255] checking status of ha-163902-m02 ...
	I0819 19:12:42.945216  457032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:42.945253  457032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:42.960451  457032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34023
	I0819 19:12:42.961007  457032 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:42.961639  457032 main.go:141] libmachine: Using API Version  1
	I0819 19:12:42.961661  457032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:42.962108  457032 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:42.962309  457032 main.go:141] libmachine: (ha-163902-m02) Calling .GetState
	I0819 19:12:42.964030  457032 status.go:330] ha-163902-m02 host status = "Running" (err=<nil>)
	I0819 19:12:42.964055  457032 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:12:42.964337  457032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:42.964361  457032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:42.979617  457032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
	I0819 19:12:42.980157  457032 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:42.980664  457032 main.go:141] libmachine: Using API Version  1
	I0819 19:12:42.980684  457032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:42.980992  457032 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:42.981193  457032 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:12:42.983721  457032 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:42.984112  457032 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:12:42.984142  457032 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:42.984328  457032 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:12:42.984663  457032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:42.984703  457032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.000569  457032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46529
	I0819 19:12:43.001064  457032 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.001583  457032 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.001610  457032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.001934  457032 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.002113  457032 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:12:43.002322  457032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:43.002347  457032 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:12:43.005373  457032 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:43.005754  457032 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:12:43.005784  457032 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:43.005917  457032 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:12:43.006124  457032 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:12:43.006265  457032 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:12:43.006392  457032 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	W0819 19:12:44.329540  457032 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:12:44.329623  457032 retry.go:31] will retry after 346.58964ms: dial tcp 192.168.39.162:22: connect: no route to host
	W0819 19:12:47.397517  457032 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.162:22: connect: no route to host
	W0819 19:12:47.397659  457032 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E0819 19:12:47.397688  457032 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:12:47.397698  457032 status.go:257] ha-163902-m02 status: &{Name:ha-163902-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 19:12:47.397738  457032 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:12:47.397748  457032 status.go:255] checking status of ha-163902-m03 ...
	I0819 19:12:47.398119  457032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:47.398182  457032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:47.413558  457032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39895
	I0819 19:12:47.414029  457032 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:47.414472  457032 main.go:141] libmachine: Using API Version  1
	I0819 19:12:47.414495  457032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:47.414912  457032 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:47.415126  457032 main.go:141] libmachine: (ha-163902-m03) Calling .GetState
	I0819 19:12:47.416956  457032 status.go:330] ha-163902-m03 host status = "Running" (err=<nil>)
	I0819 19:12:47.416978  457032 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:12:47.417402  457032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:47.417447  457032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:47.432681  457032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0819 19:12:47.433223  457032 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:47.433753  457032 main.go:141] libmachine: Using API Version  1
	I0819 19:12:47.433773  457032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:47.434106  457032 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:47.434296  457032 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:12:47.437214  457032 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:47.437637  457032 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:12:47.437677  457032 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:47.437842  457032 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:12:47.438192  457032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:47.438243  457032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:47.453783  457032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38769
	I0819 19:12:47.454245  457032 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:47.454875  457032 main.go:141] libmachine: Using API Version  1
	I0819 19:12:47.454911  457032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:47.455315  457032 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:47.455524  457032 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:12:47.455757  457032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:47.455782  457032 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:12:47.458717  457032 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:47.459063  457032 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:12:47.459094  457032 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:47.459276  457032 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:12:47.459476  457032 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:12:47.459626  457032 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:12:47.459755  457032 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:12:47.540337  457032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:47.558198  457032 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:12:47.558235  457032 api_server.go:166] Checking apiserver status ...
	I0819 19:12:47.558282  457032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:47.572892  457032 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0819 19:12:47.582889  457032 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:47.582950  457032 ssh_runner.go:195] Run: ls
	I0819 19:12:47.587746  457032 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:12:47.592184  457032 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:12:47.592215  457032 status.go:422] ha-163902-m03 apiserver status = Running (err=<nil>)
	I0819 19:12:47.592223  457032 status.go:257] ha-163902-m03 status: &{Name:ha-163902-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:12:47.592244  457032 status.go:255] checking status of ha-163902-m04 ...
	I0819 19:12:47.592577  457032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:47.592604  457032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:47.608185  457032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0819 19:12:47.608668  457032 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:47.609204  457032 main.go:141] libmachine: Using API Version  1
	I0819 19:12:47.609227  457032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:47.609600  457032 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:47.609798  457032 main.go:141] libmachine: (ha-163902-m04) Calling .GetState
	I0819 19:12:47.611583  457032 status.go:330] ha-163902-m04 host status = "Running" (err=<nil>)
	I0819 19:12:47.611600  457032 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:12:47.612059  457032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:47.612116  457032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:47.627733  457032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0819 19:12:47.628198  457032 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:47.628703  457032 main.go:141] libmachine: Using API Version  1
	I0819 19:12:47.628732  457032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:47.629109  457032 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:47.629377  457032 main.go:141] libmachine: (ha-163902-m04) Calling .GetIP
	I0819 19:12:47.632817  457032 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:47.633263  457032 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:12:47.633290  457032 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:47.633510  457032 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:12:47.633844  457032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:47.633890  457032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:47.648917  457032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41395
	I0819 19:12:47.649467  457032 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:47.649983  457032 main.go:141] libmachine: Using API Version  1
	I0819 19:12:47.650007  457032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:47.650308  457032 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:47.650503  457032 main.go:141] libmachine: (ha-163902-m04) Calling .DriverName
	I0819 19:12:47.650678  457032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:47.650700  457032 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHHostname
	I0819 19:12:47.653687  457032 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:47.654143  457032 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:12:47.654186  457032 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:47.654315  457032 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHPort
	I0819 19:12:47.654545  457032 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHKeyPath
	I0819 19:12:47.654701  457032 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHUsername
	I0819 19:12:47.654869  457032 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m04/id_rsa Username:docker}
	I0819 19:12:47.736042  457032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:47.750309  457032 status.go:257] ha-163902-m04 status: &{Name:ha-163902-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr: exit status 3 (4.089872565s)

                                                
                                                
-- stdout --
	ha-163902
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-163902-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:12:50.119372  457133 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:12:50.119652  457133 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:12:50.119663  457133 out.go:358] Setting ErrFile to fd 2...
	I0819 19:12:50.119668  457133 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:12:50.119890  457133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:12:50.120145  457133 out.go:352] Setting JSON to false
	I0819 19:12:50.120179  457133 mustload.go:65] Loading cluster: ha-163902
	I0819 19:12:50.120229  457133 notify.go:220] Checking for updates...
	I0819 19:12:50.120598  457133 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:50.120615  457133 status.go:255] checking status of ha-163902 ...
	I0819 19:12:50.121061  457133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:50.121103  457133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:50.137122  457133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0819 19:12:50.137690  457133 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:50.138368  457133 main.go:141] libmachine: Using API Version  1
	I0819 19:12:50.138407  457133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:50.138892  457133 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:50.139186  457133 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:12:50.140870  457133 status.go:330] ha-163902 host status = "Running" (err=<nil>)
	I0819 19:12:50.140892  457133 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:12:50.141303  457133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:50.141342  457133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:50.157178  457133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I0819 19:12:50.157746  457133 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:50.158307  457133 main.go:141] libmachine: Using API Version  1
	I0819 19:12:50.158337  457133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:50.158707  457133 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:50.158901  457133 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:12:50.161673  457133 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:50.162141  457133 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:12:50.162169  457133 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:50.162299  457133 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:12:50.162621  457133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:50.162659  457133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:50.178788  457133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44025
	I0819 19:12:50.179260  457133 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:50.179839  457133 main.go:141] libmachine: Using API Version  1
	I0819 19:12:50.179863  457133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:50.180271  457133 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:50.180477  457133 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:12:50.180748  457133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:50.180772  457133 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:12:50.183508  457133 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:50.183923  457133 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:12:50.183951  457133 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:50.184133  457133 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:12:50.184369  457133 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:12:50.184590  457133 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:12:50.184771  457133 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:12:50.268461  457133 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:50.275189  457133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:50.290247  457133 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:12:50.290295  457133 api_server.go:166] Checking apiserver status ...
	I0819 19:12:50.290334  457133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:50.313410  457133 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup
	W0819 19:12:50.323659  457133 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:50.323731  457133 ssh_runner.go:195] Run: ls
	I0819 19:12:50.328327  457133 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:12:50.334167  457133 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:12:50.334197  457133 status.go:422] ha-163902 apiserver status = Running (err=<nil>)
	I0819 19:12:50.334208  457133 status.go:257] ha-163902 status: &{Name:ha-163902 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:12:50.334235  457133 status.go:255] checking status of ha-163902-m02 ...
	I0819 19:12:50.334588  457133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:50.334628  457133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:50.350011  457133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33175
	I0819 19:12:50.350471  457133 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:50.351018  457133 main.go:141] libmachine: Using API Version  1
	I0819 19:12:50.351044  457133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:50.351404  457133 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:50.351622  457133 main.go:141] libmachine: (ha-163902-m02) Calling .GetState
	I0819 19:12:50.353115  457133 status.go:330] ha-163902-m02 host status = "Running" (err=<nil>)
	I0819 19:12:50.353151  457133 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:12:50.353528  457133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:50.353568  457133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:50.368991  457133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46733
	I0819 19:12:50.369529  457133 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:50.370139  457133 main.go:141] libmachine: Using API Version  1
	I0819 19:12:50.370169  457133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:50.370507  457133 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:50.370720  457133 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:12:50.373413  457133 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:50.373837  457133 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:12:50.373860  457133 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:50.374025  457133 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:12:50.374317  457133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:50.374355  457133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:50.389973  457133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I0819 19:12:50.390477  457133 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:50.391016  457133 main.go:141] libmachine: Using API Version  1
	I0819 19:12:50.391037  457133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:50.391363  457133 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:50.391558  457133 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:12:50.391754  457133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:50.391773  457133 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:12:50.394602  457133 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:50.395007  457133 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:12:50.395035  457133 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:50.395130  457133 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:12:50.395305  457133 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:12:50.395447  457133 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:12:50.395612  457133 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	W0819 19:12:50.469393  457133 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:12:50.469444  457133 retry.go:31] will retry after 272.95885ms: dial tcp 192.168.39.162:22: connect: no route to host
	W0819 19:12:53.797517  457133 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.162:22: connect: no route to host
	W0819 19:12:53.797633  457133 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E0819 19:12:53.797651  457133 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:12:53.797661  457133 status.go:257] ha-163902-m02 status: &{Name:ha-163902-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 19:12:53.797683  457133 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:12:53.797691  457133 status.go:255] checking status of ha-163902-m03 ...
	I0819 19:12:53.798016  457133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:53.798063  457133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:53.813642  457133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45105
	I0819 19:12:53.814166  457133 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:53.814710  457133 main.go:141] libmachine: Using API Version  1
	I0819 19:12:53.814738  457133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:53.815131  457133 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:53.815354  457133 main.go:141] libmachine: (ha-163902-m03) Calling .GetState
	I0819 19:12:53.817472  457133 status.go:330] ha-163902-m03 host status = "Running" (err=<nil>)
	I0819 19:12:53.817495  457133 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:12:53.817931  457133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:53.817984  457133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:53.833715  457133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44113
	I0819 19:12:53.834229  457133 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:53.834710  457133 main.go:141] libmachine: Using API Version  1
	I0819 19:12:53.834739  457133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:53.835117  457133 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:53.835346  457133 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:12:53.838367  457133 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:53.838846  457133 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:12:53.838881  457133 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:53.839054  457133 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:12:53.839362  457133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:53.839399  457133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:53.854861  457133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0819 19:12:53.855391  457133 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:53.855861  457133 main.go:141] libmachine: Using API Version  1
	I0819 19:12:53.855884  457133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:53.856180  457133 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:53.856420  457133 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:12:53.856625  457133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:53.856650  457133 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:12:53.859637  457133 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:53.860072  457133 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:12:53.860103  457133 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:12:53.860269  457133 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:12:53.860458  457133 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:12:53.860646  457133 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:12:53.860795  457133 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:12:53.945349  457133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:53.961656  457133 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:12:53.961694  457133 api_server.go:166] Checking apiserver status ...
	I0819 19:12:53.961749  457133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:53.976660  457133 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0819 19:12:53.987026  457133 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:53.987096  457133 ssh_runner.go:195] Run: ls
	I0819 19:12:53.991690  457133 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:12:53.998767  457133 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:12:53.998805  457133 status.go:422] ha-163902-m03 apiserver status = Running (err=<nil>)
	I0819 19:12:53.998819  457133 status.go:257] ha-163902-m03 status: &{Name:ha-163902-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:12:53.998845  457133 status.go:255] checking status of ha-163902-m04 ...
	I0819 19:12:53.999220  457133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:53.999271  457133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:54.014542  457133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37517
	I0819 19:12:54.015006  457133 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:54.015507  457133 main.go:141] libmachine: Using API Version  1
	I0819 19:12:54.015532  457133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:54.015844  457133 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:54.016056  457133 main.go:141] libmachine: (ha-163902-m04) Calling .GetState
	I0819 19:12:54.017876  457133 status.go:330] ha-163902-m04 host status = "Running" (err=<nil>)
	I0819 19:12:54.017898  457133 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:12:54.018254  457133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:54.018301  457133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:54.034547  457133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0819 19:12:54.035100  457133 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:54.035583  457133 main.go:141] libmachine: Using API Version  1
	I0819 19:12:54.035604  457133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:54.035980  457133 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:54.036176  457133 main.go:141] libmachine: (ha-163902-m04) Calling .GetIP
	I0819 19:12:54.039080  457133 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:54.039511  457133 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:12:54.039544  457133 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:54.039718  457133 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:12:54.040137  457133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:54.040181  457133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:54.055842  457133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37751
	I0819 19:12:54.056408  457133 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:54.057032  457133 main.go:141] libmachine: Using API Version  1
	I0819 19:12:54.057052  457133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:54.057460  457133 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:54.057671  457133 main.go:141] libmachine: (ha-163902-m04) Calling .DriverName
	I0819 19:12:54.057919  457133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:54.057941  457133 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHHostname
	I0819 19:12:54.060828  457133 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:54.061260  457133 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:12:54.061295  457133 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:12:54.061463  457133 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHPort
	I0819 19:12:54.061641  457133 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHKeyPath
	I0819 19:12:54.061792  457133 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHUsername
	I0819 19:12:54.061981  457133 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m04/id_rsa Username:docker}
	I0819 19:12:54.148246  457133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:54.161515  457133 status.go:257] ha-163902-m04 status: &{Name:ha-163902-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr: exit status 3 (3.738528147s)

-- stdout --
	ha-163902
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-163902-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 19:12:57.023368  457249 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:12:57.023686  457249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:12:57.023698  457249 out.go:358] Setting ErrFile to fd 2...
	I0819 19:12:57.023703  457249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:12:57.023883  457249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:12:57.024070  457249 out.go:352] Setting JSON to false
	I0819 19:12:57.024097  457249 mustload.go:65] Loading cluster: ha-163902
	I0819 19:12:57.024253  457249 notify.go:220] Checking for updates...
	I0819 19:12:57.024510  457249 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:57.024528  457249 status.go:255] checking status of ha-163902 ...
	I0819 19:12:57.024984  457249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:57.025048  457249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:57.042963  457249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I0819 19:12:57.043439  457249 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:57.044045  457249 main.go:141] libmachine: Using API Version  1
	I0819 19:12:57.044075  457249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:57.044623  457249 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:57.044854  457249 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:12:57.046751  457249 status.go:330] ha-163902 host status = "Running" (err=<nil>)
	I0819 19:12:57.046771  457249 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:12:57.047082  457249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:57.047126  457249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:57.063541  457249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37651
	I0819 19:12:57.064189  457249 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:57.064895  457249 main.go:141] libmachine: Using API Version  1
	I0819 19:12:57.064953  457249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:57.065325  457249 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:57.065558  457249 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:12:57.068850  457249 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:57.069379  457249 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:12:57.069410  457249 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:57.069630  457249 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:12:57.069944  457249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:57.069995  457249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:57.090384  457249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44911
	I0819 19:12:57.090863  457249 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:57.091397  457249 main.go:141] libmachine: Using API Version  1
	I0819 19:12:57.091422  457249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:57.091855  457249 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:57.092072  457249 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:12:57.092313  457249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:57.092334  457249 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:12:57.095414  457249 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:57.095873  457249 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:12:57.095897  457249 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:12:57.096122  457249 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:12:57.096306  457249 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:12:57.096461  457249 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:12:57.096641  457249 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:12:57.177851  457249 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:57.183752  457249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:12:57.199643  457249 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:12:57.199686  457249 api_server.go:166] Checking apiserver status ...
	I0819 19:12:57.199740  457249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:57.214903  457249 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup
	W0819 19:12:57.225788  457249 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:57.225860  457249 ssh_runner.go:195] Run: ls
	I0819 19:12:57.230267  457249 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:12:57.236425  457249 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:12:57.236466  457249 status.go:422] ha-163902 apiserver status = Running (err=<nil>)
	I0819 19:12:57.236478  457249 status.go:257] ha-163902 status: &{Name:ha-163902 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:12:57.236499  457249 status.go:255] checking status of ha-163902-m02 ...
	I0819 19:12:57.236849  457249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:57.236878  457249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:57.252649  457249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42767
	I0819 19:12:57.253150  457249 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:57.253670  457249 main.go:141] libmachine: Using API Version  1
	I0819 19:12:57.253694  457249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:57.254027  457249 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:57.254249  457249 main.go:141] libmachine: (ha-163902-m02) Calling .GetState
	I0819 19:12:57.255968  457249 status.go:330] ha-163902-m02 host status = "Running" (err=<nil>)
	I0819 19:12:57.255986  457249 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:12:57.256373  457249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:57.256400  457249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:57.271810  457249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44703
	I0819 19:12:57.272243  457249 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:57.272699  457249 main.go:141] libmachine: Using API Version  1
	I0819 19:12:57.272724  457249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:57.273040  457249 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:57.273250  457249 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:12:57.276336  457249 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:57.276788  457249 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:12:57.276841  457249 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:57.277029  457249 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:12:57.277402  457249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:57.277445  457249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:57.292873  457249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39239
	I0819 19:12:57.293382  457249 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:57.293922  457249 main.go:141] libmachine: Using API Version  1
	I0819 19:12:57.293948  457249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:57.294345  457249 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:57.294565  457249 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:12:57.294790  457249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:12:57.294830  457249 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:12:57.298137  457249 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:57.298623  457249 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:12:57.298656  457249 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:12:57.298814  457249 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:12:57.299024  457249 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:12:57.299242  457249 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:12:57.299373  457249 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	W0819 19:13:00.357386  457249 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.162:22: connect: no route to host
	W0819 19:13:00.357504  457249 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E0819 19:13:00.357537  457249 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:13:00.357547  457249 status.go:257] ha-163902-m02 status: &{Name:ha-163902-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 19:13:00.357567  457249 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:13:00.357575  457249 status.go:255] checking status of ha-163902-m03 ...
	I0819 19:13:00.357916  457249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:00.357966  457249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:00.374374  457249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43553
	I0819 19:13:00.374866  457249 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:00.375364  457249 main.go:141] libmachine: Using API Version  1
	I0819 19:13:00.375384  457249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:00.375730  457249 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:00.375945  457249 main.go:141] libmachine: (ha-163902-m03) Calling .GetState
	I0819 19:13:00.377815  457249 status.go:330] ha-163902-m03 host status = "Running" (err=<nil>)
	I0819 19:13:00.377837  457249 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:13:00.378183  457249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:00.378229  457249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:00.394014  457249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39627
	I0819 19:13:00.394523  457249 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:00.395062  457249 main.go:141] libmachine: Using API Version  1
	I0819 19:13:00.395088  457249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:00.395456  457249 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:00.395691  457249 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:13:00.398369  457249 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:00.398896  457249 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:13:00.398922  457249 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:00.399098  457249 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:13:00.399419  457249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:00.399458  457249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:00.415424  457249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0819 19:13:00.415933  457249 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:00.416424  457249 main.go:141] libmachine: Using API Version  1
	I0819 19:13:00.416446  457249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:00.416866  457249 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:00.417120  457249 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:13:00.417337  457249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:13:00.417363  457249 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:13:00.420437  457249 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:00.420900  457249 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:13:00.420930  457249 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:00.421113  457249 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:13:00.421328  457249 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:13:00.421465  457249 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:13:00.421627  457249 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:13:00.500675  457249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:13:00.517103  457249 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:13:00.517152  457249 api_server.go:166] Checking apiserver status ...
	I0819 19:13:00.517193  457249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:00.533594  457249 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0819 19:13:00.543742  457249 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:13:00.543815  457249 ssh_runner.go:195] Run: ls
	I0819 19:13:00.548396  457249 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:13:00.554785  457249 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:13:00.554816  457249 status.go:422] ha-163902-m03 apiserver status = Running (err=<nil>)
	I0819 19:13:00.554826  457249 status.go:257] ha-163902-m03 status: &{Name:ha-163902-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:13:00.554843  457249 status.go:255] checking status of ha-163902-m04 ...
	I0819 19:13:00.555136  457249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:00.555160  457249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:00.570392  457249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0819 19:13:00.570871  457249 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:00.571376  457249 main.go:141] libmachine: Using API Version  1
	I0819 19:13:00.571399  457249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:00.571755  457249 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:00.571972  457249 main.go:141] libmachine: (ha-163902-m04) Calling .GetState
	I0819 19:13:00.573778  457249 status.go:330] ha-163902-m04 host status = "Running" (err=<nil>)
	I0819 19:13:00.573798  457249 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:13:00.574139  457249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:00.574167  457249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:00.589685  457249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I0819 19:13:00.590155  457249 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:00.590603  457249 main.go:141] libmachine: Using API Version  1
	I0819 19:13:00.590623  457249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:00.590989  457249 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:00.591203  457249 main.go:141] libmachine: (ha-163902-m04) Calling .GetIP
	I0819 19:13:00.594125  457249 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:00.594576  457249 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:13:00.594610  457249 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:00.594876  457249 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:13:00.595203  457249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:00.595240  457249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:00.610975  457249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43357
	I0819 19:13:00.611448  457249 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:00.611956  457249 main.go:141] libmachine: Using API Version  1
	I0819 19:13:00.611977  457249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:00.612280  457249 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:00.612493  457249 main.go:141] libmachine: (ha-163902-m04) Calling .DriverName
	I0819 19:13:00.612727  457249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:13:00.612747  457249 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHHostname
	I0819 19:13:00.615823  457249 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:00.616210  457249 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:13:00.616246  457249 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:00.616442  457249 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHPort
	I0819 19:13:00.616668  457249 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHKeyPath
	I0819 19:13:00.616822  457249 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHUsername
	I0819 19:13:00.616957  457249 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m04/id_rsa Username:docker}
	I0819 19:13:00.696222  457249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:13:00.711783  457249 status.go:257] ha-163902-m04 status: &{Name:ha-163902-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr: exit status 3 (3.736063608s)

-- stdout --
	ha-163902
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-163902-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 19:13:06.852962  457365 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:13:06.853282  457365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:13:06.853293  457365 out.go:358] Setting ErrFile to fd 2...
	I0819 19:13:06.853297  457365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:13:06.853460  457365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:13:06.853635  457365 out.go:352] Setting JSON to false
	I0819 19:13:06.853666  457365 mustload.go:65] Loading cluster: ha-163902
	I0819 19:13:06.853726  457365 notify.go:220] Checking for updates...
	I0819 19:13:06.854186  457365 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:06.854207  457365 status.go:255] checking status of ha-163902 ...
	I0819 19:13:06.854679  457365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:06.854751  457365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:06.871188  457365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0819 19:13:06.871740  457365 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:06.872405  457365 main.go:141] libmachine: Using API Version  1
	I0819 19:13:06.872427  457365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:06.872802  457365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:06.873035  457365 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:13:06.874614  457365 status.go:330] ha-163902 host status = "Running" (err=<nil>)
	I0819 19:13:06.874638  457365 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:13:06.874954  457365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:06.874998  457365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:06.890455  457365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43313
	I0819 19:13:06.890887  457365 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:06.891453  457365 main.go:141] libmachine: Using API Version  1
	I0819 19:13:06.891491  457365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:06.891911  457365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:06.892122  457365 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:13:06.895911  457365 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:13:06.896387  457365 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:13:06.896424  457365 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:13:06.896598  457365 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:13:06.896936  457365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:06.896983  457365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:06.913156  457365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38417
	I0819 19:13:06.913714  457365 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:06.914307  457365 main.go:141] libmachine: Using API Version  1
	I0819 19:13:06.914335  457365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:06.914736  457365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:06.914929  457365 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:13:06.915145  457365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:13:06.915170  457365 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:13:06.918315  457365 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:13:06.918821  457365 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:13:06.918854  457365 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:13:06.919124  457365 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:13:06.919374  457365 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:13:06.919546  457365 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:13:06.919716  457365 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:13:07.000369  457365 ssh_runner.go:195] Run: systemctl --version
	I0819 19:13:07.006653  457365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:13:07.021272  457365 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:13:07.021312  457365 api_server.go:166] Checking apiserver status ...
	I0819 19:13:07.021354  457365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.036942  457365 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup
	W0819 19:13:07.048761  457365 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:13:07.048832  457365 ssh_runner.go:195] Run: ls
	I0819 19:13:07.053447  457365 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:13:07.059408  457365 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:13:07.059450  457365 status.go:422] ha-163902 apiserver status = Running (err=<nil>)
	I0819 19:13:07.059464  457365 status.go:257] ha-163902 status: &{Name:ha-163902 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:13:07.059483  457365 status.go:255] checking status of ha-163902-m02 ...
	I0819 19:13:07.059805  457365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:07.059831  457365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:07.076916  457365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40181
	I0819 19:13:07.077506  457365 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:07.078058  457365 main.go:141] libmachine: Using API Version  1
	I0819 19:13:07.078087  457365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:07.078472  457365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:07.078691  457365 main.go:141] libmachine: (ha-163902-m02) Calling .GetState
	I0819 19:13:07.080586  457365 status.go:330] ha-163902-m02 host status = "Running" (err=<nil>)
	I0819 19:13:07.080608  457365 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:13:07.081014  457365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:07.081075  457365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:07.096743  457365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I0819 19:13:07.097220  457365 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:07.097724  457365 main.go:141] libmachine: Using API Version  1
	I0819 19:13:07.097749  457365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:07.098118  457365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:07.098362  457365 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:13:07.101388  457365 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:13:07.101993  457365 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:13:07.102030  457365 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:13:07.102256  457365 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:13:07.102577  457365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:07.102616  457365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:07.118287  457365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41637
	I0819 19:13:07.118851  457365 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:07.119409  457365 main.go:141] libmachine: Using API Version  1
	I0819 19:13:07.119432  457365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:07.119729  457365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:07.119921  457365 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:13:07.120141  457365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:13:07.120163  457365 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:13:07.123033  457365 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:13:07.123437  457365 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:13:07.123462  457365 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:13:07.123609  457365 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:13:07.123806  457365 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:13:07.123991  457365 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:13:07.124125  457365 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	W0819 19:13:10.185377  457365 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.162:22: connect: no route to host
	W0819 19:13:10.185482  457365 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E0819 19:13:10.185496  457365 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:13:10.185504  457365 status.go:257] ha-163902-m02 status: &{Name:ha-163902-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 19:13:10.185526  457365 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0819 19:13:10.185541  457365 status.go:255] checking status of ha-163902-m03 ...
	I0819 19:13:10.185848  457365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:10.185890  457365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:10.200961  457365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37883
	I0819 19:13:10.201535  457365 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:10.202054  457365 main.go:141] libmachine: Using API Version  1
	I0819 19:13:10.202076  457365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:10.202467  457365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:10.202653  457365 main.go:141] libmachine: (ha-163902-m03) Calling .GetState
	I0819 19:13:10.204552  457365 status.go:330] ha-163902-m03 host status = "Running" (err=<nil>)
	I0819 19:13:10.204583  457365 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:13:10.204926  457365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:10.204965  457365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:10.221704  457365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0819 19:13:10.222228  457365 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:10.222786  457365 main.go:141] libmachine: Using API Version  1
	I0819 19:13:10.222810  457365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:10.223165  457365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:10.223377  457365 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:13:10.226311  457365 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:10.226865  457365 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:13:10.226904  457365 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:10.227124  457365 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:13:10.227464  457365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:10.227542  457365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:10.243392  457365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37563
	I0819 19:13:10.243857  457365 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:10.244320  457365 main.go:141] libmachine: Using API Version  1
	I0819 19:13:10.244336  457365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:10.244693  457365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:10.244883  457365 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:13:10.245081  457365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:13:10.245102  457365 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:13:10.247841  457365 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:10.248193  457365 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:13:10.248221  457365 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:10.248367  457365 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:13:10.248569  457365 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:13:10.248730  457365 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:13:10.248881  457365 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:13:10.328615  457365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:13:10.343500  457365 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:13:10.343531  457365 api_server.go:166] Checking apiserver status ...
	I0819 19:13:10.343589  457365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:10.357422  457365 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0819 19:13:10.367156  457365 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:13:10.367239  457365 ssh_runner.go:195] Run: ls
	I0819 19:13:10.371731  457365 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:13:10.376110  457365 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:13:10.376148  457365 status.go:422] ha-163902-m03 apiserver status = Running (err=<nil>)
	I0819 19:13:10.376161  457365 status.go:257] ha-163902-m03 status: &{Name:ha-163902-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:13:10.376184  457365 status.go:255] checking status of ha-163902-m04 ...
	I0819 19:13:10.376695  457365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:10.376740  457365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:10.392177  457365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40171
	I0819 19:13:10.392685  457365 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:10.393214  457365 main.go:141] libmachine: Using API Version  1
	I0819 19:13:10.393235  457365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:10.393595  457365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:10.393850  457365 main.go:141] libmachine: (ha-163902-m04) Calling .GetState
	I0819 19:13:10.395591  457365 status.go:330] ha-163902-m04 host status = "Running" (err=<nil>)
	I0819 19:13:10.395610  457365 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:13:10.396010  457365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:10.396055  457365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:10.412557  457365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I0819 19:13:10.413020  457365 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:10.413655  457365 main.go:141] libmachine: Using API Version  1
	I0819 19:13:10.413680  457365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:10.414122  457365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:10.414365  457365 main.go:141] libmachine: (ha-163902-m04) Calling .GetIP
	I0819 19:13:10.418626  457365 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:10.419259  457365 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:13:10.419293  457365 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:10.419487  457365 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:13:10.419826  457365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:10.419877  457365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:10.436007  457365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37425
	I0819 19:13:10.436578  457365 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:10.437233  457365 main.go:141] libmachine: Using API Version  1
	I0819 19:13:10.437256  457365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:10.437625  457365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:10.437796  457365 main.go:141] libmachine: (ha-163902-m04) Calling .DriverName
	I0819 19:13:10.438032  457365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:13:10.438053  457365 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHHostname
	I0819 19:13:10.440753  457365 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:10.441161  457365 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:13:10.441189  457365 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:10.441381  457365 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHPort
	I0819 19:13:10.441599  457365 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHKeyPath
	I0819 19:13:10.441756  457365 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHUsername
	I0819 19:13:10.441914  457365 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m04/id_rsa Username:docker}
	I0819 19:13:10.526081  457365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:13:10.541286  457365 status.go:257] ha-163902-m04 status: &{Name:ha-163902-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr: exit status 7 (646.873827ms)

                                                
                                                
-- stdout --
	ha-163902
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-163902-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:13:16.636102  457501 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:13:16.636407  457501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:13:16.636419  457501 out.go:358] Setting ErrFile to fd 2...
	I0819 19:13:16.636424  457501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:13:16.636668  457501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:13:16.636893  457501 out.go:352] Setting JSON to false
	I0819 19:13:16.636924  457501 mustload.go:65] Loading cluster: ha-163902
	I0819 19:13:16.636988  457501 notify.go:220] Checking for updates...
	I0819 19:13:16.637415  457501 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:16.637435  457501 status.go:255] checking status of ha-163902 ...
	I0819 19:13:16.637918  457501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:16.637979  457501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:16.664291  457501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36057
	I0819 19:13:16.664995  457501 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:16.665560  457501 main.go:141] libmachine: Using API Version  1
	I0819 19:13:16.665582  457501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:16.665922  457501 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:16.666123  457501 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:13:16.667924  457501 status.go:330] ha-163902 host status = "Running" (err=<nil>)
	I0819 19:13:16.667943  457501 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:13:16.668231  457501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:16.668300  457501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:16.684127  457501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34097
	I0819 19:13:16.684658  457501 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:16.685174  457501 main.go:141] libmachine: Using API Version  1
	I0819 19:13:16.685201  457501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:16.685539  457501 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:16.685740  457501 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:13:16.688767  457501 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:13:16.689252  457501 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:13:16.689279  457501 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:13:16.689406  457501 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:13:16.689707  457501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:16.689755  457501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:16.707209  457501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41763
	I0819 19:13:16.707674  457501 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:16.708191  457501 main.go:141] libmachine: Using API Version  1
	I0819 19:13:16.708222  457501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:16.708540  457501 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:16.708732  457501 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:13:16.708917  457501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:13:16.708950  457501 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:13:16.711887  457501 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:13:16.712333  457501 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:13:16.712359  457501 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:13:16.712569  457501 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:13:16.712784  457501 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:13:16.712992  457501 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:13:16.713183  457501 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:13:16.793015  457501 ssh_runner.go:195] Run: systemctl --version
	I0819 19:13:16.804308  457501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:13:16.821420  457501 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:13:16.821459  457501 api_server.go:166] Checking apiserver status ...
	I0819 19:13:16.821522  457501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.835649  457501 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup
	W0819 19:13:16.846263  457501 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:13:16.846347  457501 ssh_runner.go:195] Run: ls
	I0819 19:13:16.851848  457501 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:13:16.857694  457501 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:13:16.857724  457501 status.go:422] ha-163902 apiserver status = Running (err=<nil>)
	I0819 19:13:16.857736  457501 status.go:257] ha-163902 status: &{Name:ha-163902 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:13:16.857755  457501 status.go:255] checking status of ha-163902-m02 ...
	I0819 19:13:16.858091  457501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:16.858120  457501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:16.874450  457501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I0819 19:13:16.874929  457501 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:16.875476  457501 main.go:141] libmachine: Using API Version  1
	I0819 19:13:16.875513  457501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:16.875875  457501 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:16.876146  457501 main.go:141] libmachine: (ha-163902-m02) Calling .GetState
	I0819 19:13:16.877940  457501 status.go:330] ha-163902-m02 host status = "Stopped" (err=<nil>)
	I0819 19:13:16.877957  457501 status.go:343] host is not running, skipping remaining checks
	I0819 19:13:16.877963  457501 status.go:257] ha-163902-m02 status: &{Name:ha-163902-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:13:16.877984  457501 status.go:255] checking status of ha-163902-m03 ...
	I0819 19:13:16.878311  457501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:16.878342  457501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:16.894184  457501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
	I0819 19:13:16.894685  457501 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:16.895167  457501 main.go:141] libmachine: Using API Version  1
	I0819 19:13:16.895190  457501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:16.895499  457501 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:16.895699  457501 main.go:141] libmachine: (ha-163902-m03) Calling .GetState
	I0819 19:13:16.897522  457501 status.go:330] ha-163902-m03 host status = "Running" (err=<nil>)
	I0819 19:13:16.897544  457501 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:13:16.897908  457501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:16.897951  457501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:16.913175  457501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38897
	I0819 19:13:16.913702  457501 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:16.914320  457501 main.go:141] libmachine: Using API Version  1
	I0819 19:13:16.914346  457501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:16.914686  457501 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:16.914942  457501 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:13:16.917844  457501 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:16.918344  457501 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:13:16.918368  457501 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:16.918551  457501 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:13:16.918951  457501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:16.918995  457501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:16.935629  457501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I0819 19:13:16.936126  457501 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:16.936592  457501 main.go:141] libmachine: Using API Version  1
	I0819 19:13:16.936608  457501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:16.936948  457501 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:16.937169  457501 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:13:16.937390  457501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:13:16.937412  457501 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:13:16.940416  457501 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:16.940910  457501 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:13:16.940939  457501 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:16.941150  457501 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:13:16.941379  457501 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:13:16.941584  457501 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:13:16.941787  457501 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:13:17.024739  457501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:13:17.040064  457501 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:13:17.040097  457501 api_server.go:166] Checking apiserver status ...
	I0819 19:13:17.040130  457501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.054315  457501 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0819 19:13:17.064994  457501 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:13:17.065080  457501 ssh_runner.go:195] Run: ls
	I0819 19:13:17.069572  457501 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:13:17.074047  457501 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:13:17.074075  457501 status.go:422] ha-163902-m03 apiserver status = Running (err=<nil>)
	I0819 19:13:17.074084  457501 status.go:257] ha-163902-m03 status: &{Name:ha-163902-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:13:17.074115  457501 status.go:255] checking status of ha-163902-m04 ...
	I0819 19:13:17.074455  457501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:17.074482  457501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:17.091327  457501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0819 19:13:17.091865  457501 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:17.092430  457501 main.go:141] libmachine: Using API Version  1
	I0819 19:13:17.092459  457501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:17.092784  457501 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:17.092963  457501 main.go:141] libmachine: (ha-163902-m04) Calling .GetState
	I0819 19:13:17.094690  457501 status.go:330] ha-163902-m04 host status = "Running" (err=<nil>)
	I0819 19:13:17.094713  457501 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:13:17.095147  457501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:17.095187  457501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:17.111617  457501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I0819 19:13:17.112072  457501 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:17.112636  457501 main.go:141] libmachine: Using API Version  1
	I0819 19:13:17.112654  457501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:17.112986  457501 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:17.113231  457501 main.go:141] libmachine: (ha-163902-m04) Calling .GetIP
	I0819 19:13:17.116410  457501 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:17.116799  457501 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:13:17.116844  457501 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:17.116987  457501 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:13:17.117323  457501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:17.117365  457501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:17.133424  457501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38611
	I0819 19:13:17.133871  457501 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:17.134380  457501 main.go:141] libmachine: Using API Version  1
	I0819 19:13:17.134428  457501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:17.134821  457501 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:17.135079  457501 main.go:141] libmachine: (ha-163902-m04) Calling .DriverName
	I0819 19:13:17.135301  457501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:13:17.135323  457501 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHHostname
	I0819 19:13:17.138218  457501 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:17.138643  457501 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:13:17.138675  457501 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:17.138832  457501 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHPort
	I0819 19:13:17.138996  457501 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHKeyPath
	I0819 19:13:17.139156  457501 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHUsername
	I0819 19:13:17.139300  457501 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m04/id_rsa Username:docker}
	I0819 19:13:17.220277  457501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:13:17.234435  457501 status.go:257] ha-163902-m04 status: &{Name:ha-163902-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
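The repeated "unable to find freezer cgroup" warnings in the status output above are not what fails this test: when sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup matches nothing, the check logs the warning and moves on to the healthz probe, which returns 200 in every attempt shown, so the apiserver is still reported as Running. An empty match like this usually just means the guest exposes no separate freezer hierarchy (cgroup v2, or freezer not mounted). The stand-alone Go sketch below illustrates that distinction; it is not part of the test suite, and the file name and marker path are the usual cgroup v2 convention rather than anything taken from this run.

	// cgroupmode.go: hypothetical helper, not from minikube. On a cgroup v2
	// (unified) host, /sys/fs/cgroup/cgroup.controllers exists and
	// /proc/<pid>/cgroup carries no "N:freezer:" line, which is why the egrep
	// in the log above exits with status 1.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified): no per-controller freezer hierarchy")
		} else {
			fmt.Println("cgroup v1 or hybrid: a freezer hierarchy may be mounted")
		}
	}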
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr: exit status 7 (621.798302ms)

                                                
                                                
-- stdout --
	ha-163902
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-163902-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:13:27.066897  457605 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:13:27.067331  457605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:13:27.067365  457605 out.go:358] Setting ErrFile to fd 2...
	I0819 19:13:27.067387  457605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:13:27.067892  457605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:13:27.068094  457605 out.go:352] Setting JSON to false
	I0819 19:13:27.068124  457605 mustload.go:65] Loading cluster: ha-163902
	I0819 19:13:27.068184  457605 notify.go:220] Checking for updates...
	I0819 19:13:27.068648  457605 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:27.068671  457605 status.go:255] checking status of ha-163902 ...
	I0819 19:13:27.069188  457605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:27.069251  457605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:27.085936  457605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0819 19:13:27.086458  457605 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:27.087264  457605 main.go:141] libmachine: Using API Version  1
	I0819 19:13:27.087301  457605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:27.087684  457605 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:27.087933  457605 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:13:27.089795  457605 status.go:330] ha-163902 host status = "Running" (err=<nil>)
	I0819 19:13:27.089816  457605 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:13:27.090130  457605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:27.090194  457605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:27.105593  457605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0819 19:13:27.106040  457605 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:27.106602  457605 main.go:141] libmachine: Using API Version  1
	I0819 19:13:27.106626  457605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:27.107004  457605 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:27.107188  457605 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:13:27.110653  457605 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:13:27.111158  457605 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:13:27.111190  457605 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:13:27.111255  457605 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:13:27.111694  457605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:27.111744  457605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:27.128535  457605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0819 19:13:27.128978  457605 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:27.129500  457605 main.go:141] libmachine: Using API Version  1
	I0819 19:13:27.129520  457605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:27.130215  457605 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:27.131304  457605 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:13:27.131643  457605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:13:27.131707  457605 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:13:27.134815  457605 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:13:27.135285  457605 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:13:27.135313  457605 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:13:27.135513  457605 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:13:27.135721  457605 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:13:27.135893  457605 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:13:27.136053  457605 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:13:27.220984  457605 ssh_runner.go:195] Run: systemctl --version
	I0819 19:13:27.227614  457605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:13:27.243473  457605 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:13:27.243512  457605 api_server.go:166] Checking apiserver status ...
	I0819 19:13:27.243549  457605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.258039  457605 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup
	W0819 19:13:27.269661  457605 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:13:27.269729  457605 ssh_runner.go:195] Run: ls
	I0819 19:13:27.274449  457605 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:13:27.278735  457605 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:13:27.278764  457605 status.go:422] ha-163902 apiserver status = Running (err=<nil>)
	I0819 19:13:27.278776  457605 status.go:257] ha-163902 status: &{Name:ha-163902 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:13:27.278796  457605 status.go:255] checking status of ha-163902-m02 ...
	I0819 19:13:27.279089  457605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:27.279114  457605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:27.294929  457605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
	I0819 19:13:27.295471  457605 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:27.296002  457605 main.go:141] libmachine: Using API Version  1
	I0819 19:13:27.296025  457605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:27.296406  457605 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:27.296645  457605 main.go:141] libmachine: (ha-163902-m02) Calling .GetState
	I0819 19:13:27.298310  457605 status.go:330] ha-163902-m02 host status = "Stopped" (err=<nil>)
	I0819 19:13:27.298325  457605 status.go:343] host is not running, skipping remaining checks
	I0819 19:13:27.298331  457605 status.go:257] ha-163902-m02 status: &{Name:ha-163902-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:13:27.298361  457605 status.go:255] checking status of ha-163902-m03 ...
	I0819 19:13:27.298648  457605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:27.298690  457605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:27.313972  457605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41477
	I0819 19:13:27.314505  457605 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:27.315151  457605 main.go:141] libmachine: Using API Version  1
	I0819 19:13:27.315180  457605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:27.315548  457605 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:27.315763  457605 main.go:141] libmachine: (ha-163902-m03) Calling .GetState
	I0819 19:13:27.317552  457605 status.go:330] ha-163902-m03 host status = "Running" (err=<nil>)
	I0819 19:13:27.317576  457605 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:13:27.317915  457605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:27.317967  457605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:27.334359  457605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34471
	I0819 19:13:27.334834  457605 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:27.335309  457605 main.go:141] libmachine: Using API Version  1
	I0819 19:13:27.335332  457605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:27.335750  457605 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:27.335996  457605 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:13:27.339041  457605 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:27.339472  457605 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:13:27.339499  457605 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:27.339713  457605 host.go:66] Checking if "ha-163902-m03" exists ...
	I0819 19:13:27.340027  457605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:27.340071  457605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:27.355633  457605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40381
	I0819 19:13:27.356074  457605 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:27.356630  457605 main.go:141] libmachine: Using API Version  1
	I0819 19:13:27.356659  457605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:27.356968  457605 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:27.357204  457605 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:13:27.357422  457605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:13:27.357447  457605 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:13:27.360478  457605 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:27.361055  457605 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:13:27.361082  457605 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:27.361278  457605 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:13:27.361463  457605 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:13:27.361655  457605 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:13:27.361891  457605 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:13:27.440410  457605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:13:27.455423  457605 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:13:27.455456  457605 api_server.go:166] Checking apiserver status ...
	I0819 19:13:27.455496  457605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.469172  457605 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0819 19:13:27.478661  457605 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:13:27.478725  457605 ssh_runner.go:195] Run: ls
	I0819 19:13:27.482859  457605 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:13:27.487158  457605 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:13:27.487188  457605 status.go:422] ha-163902-m03 apiserver status = Running (err=<nil>)
	I0819 19:13:27.487201  457605 status.go:257] ha-163902-m03 status: &{Name:ha-163902-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:13:27.487227  457605 status.go:255] checking status of ha-163902-m04 ...
	I0819 19:13:27.487545  457605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:27.487577  457605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:27.503561  457605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44977
	I0819 19:13:27.504103  457605 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:27.504590  457605 main.go:141] libmachine: Using API Version  1
	I0819 19:13:27.504612  457605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:27.504903  457605 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:27.505093  457605 main.go:141] libmachine: (ha-163902-m04) Calling .GetState
	I0819 19:13:27.506541  457605 status.go:330] ha-163902-m04 host status = "Running" (err=<nil>)
	I0819 19:13:27.506564  457605 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:13:27.506838  457605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:27.506872  457605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:27.522813  457605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36183
	I0819 19:13:27.523254  457605 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:27.523845  457605 main.go:141] libmachine: Using API Version  1
	I0819 19:13:27.523871  457605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:27.524214  457605 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:27.524436  457605 main.go:141] libmachine: (ha-163902-m04) Calling .GetIP
	I0819 19:13:27.527570  457605 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:27.528008  457605 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:13:27.528037  457605 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:27.528187  457605 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:13:27.528483  457605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:27.528529  457605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:27.543941  457605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39253
	I0819 19:13:27.544453  457605 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:27.544990  457605 main.go:141] libmachine: Using API Version  1
	I0819 19:13:27.545016  457605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:27.545425  457605 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:27.545635  457605 main.go:141] libmachine: (ha-163902-m04) Calling .DriverName
	I0819 19:13:27.545832  457605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:13:27.545853  457605 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHHostname
	I0819 19:13:27.549018  457605 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:27.549498  457605 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:13:27.549540  457605 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:27.549672  457605 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHPort
	I0819 19:13:27.549863  457605 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHKeyPath
	I0819 19:13:27.550039  457605 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHUsername
	I0819 19:13:27.550159  457605 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m04/id_rsa Username:docker}
	I0819 19:13:27.628236  457605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:13:27.643084  457605 status.go:257] ha-163902-m04 status: &{Name:ha-163902-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr" : exit status 7
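The exit status 7 in each attempt above comes from ha-163902-m02 still reporting its host, kubelet and apiserver as Stopped after "ha-163902 node start m02" (the corresponding Audit entry below has no End Time), so every status retry inside the test's wait window fails the same way. As a rough illustration of that polling loop outside the harness, the sketch below shells out to the same binary used in this run; the profile and node names are taken from this log, while the file name, the -o json output shape (one object per node, with the field names visible in the status structs above) and the timeout are assumptions, not something this report verifies.

	// pollstatus.go: hypothetical stand-alone sketch, not part of ha_test.go.
	// It re-runs the status command used by the test and waits until the
	// secondary control-plane node reports a running host and kubelet.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"time"
	)

	// nodeStatus mirrors the fields visible in the status structs logged above.
	type nodeStatus struct {
		Name      string
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		deadline := time.Now().Add(3 * time.Minute)
		for time.Now().Before(deadline) {
			// Output() still returns the captured stdout when the command
			// exits non-zero (e.g. 7), so the error is deliberately ignored.
			out, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-163902",
				"status", "-o", "json").Output()
			var nodes []nodeStatus
			if json.Unmarshal(out, &nodes) == nil {
				for _, n := range nodes {
					if n.Name == "ha-163902-m02" && n.Host == "Running" && n.Kubelet == "Running" {
						fmt.Println("ha-163902-m02 is back up")
						return
					}
				}
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for ha-163902-m02")
	}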
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-163902 -n ha-163902
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-163902 logs -n 25: (1.363805533s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m03:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902:/home/docker/cp-test_ha-163902-m03_ha-163902.txt                       |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902 sudo cat                                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m03_ha-163902.txt                                 |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m03:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m02:/home/docker/cp-test_ha-163902-m03_ha-163902-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m02 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m03_ha-163902-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m03:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04:/home/docker/cp-test_ha-163902-m03_ha-163902-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m04 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m03_ha-163902-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-163902 cp testdata/cp-test.txt                                                | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4137015802/001/cp-test_ha-163902-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902:/home/docker/cp-test_ha-163902-m04_ha-163902.txt                       |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902 sudo cat                                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m04_ha-163902.txt                                 |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m02:/home/docker/cp-test_ha-163902-m04_ha-163902-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m02 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m04_ha-163902-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03:/home/docker/cp-test_ha-163902-m04_ha-163902-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m03 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m04_ha-163902-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-163902 node stop m02 -v=7                                                     | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-163902 node start m02 -v=7                                                    | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:05:31
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:05:31.418232  452010 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:05:31.418352  452010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:05:31.418358  452010 out.go:358] Setting ErrFile to fd 2...
	I0819 19:05:31.418362  452010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:05:31.418546  452010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:05:31.419129  452010 out.go:352] Setting JSON to false
	I0819 19:05:31.420120  452010 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10082,"bootTime":1724084249,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:05:31.420188  452010 start.go:139] virtualization: kvm guest
	I0819 19:05:31.422656  452010 out.go:177] * [ha-163902] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:05:31.424219  452010 notify.go:220] Checking for updates...
	I0819 19:05:31.424236  452010 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 19:05:31.425870  452010 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:05:31.427552  452010 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:05:31.429212  452010 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:05:31.430737  452010 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:05:31.432186  452010 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:05:31.433967  452010 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 19:05:31.471459  452010 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 19:05:31.473054  452010 start.go:297] selected driver: kvm2
	I0819 19:05:31.473074  452010 start.go:901] validating driver "kvm2" against <nil>
	I0819 19:05:31.473085  452010 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:05:31.473948  452010 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:05:31.474033  452010 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:05:31.490219  452010 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:05:31.490286  452010 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 19:05:31.490507  452010 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:05:31.490544  452010 cni.go:84] Creating CNI manager for ""
	I0819 19:05:31.490552  452010 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 19:05:31.490558  452010 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 19:05:31.490608  452010 start.go:340] cluster config:
	{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:05:31.490706  452010 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:05:31.492897  452010 out.go:177] * Starting "ha-163902" primary control-plane node in "ha-163902" cluster
	I0819 19:05:31.494365  452010 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:05:31.494418  452010 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:05:31.494432  452010 cache.go:56] Caching tarball of preloaded images
	I0819 19:05:31.494530  452010 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:05:31.494540  452010 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:05:31.494829  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:05:31.494853  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json: {Name:mkb31c7310cece5f6635574f2a3901077b4ca7df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
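
The two log lines above cover writing the new profile's config.json while holding a write lock (500ms retry delay, 1m0s timeout). A loose sketch of that save-under-lock pattern in Go, not minikube's actual implementation (the ClusterConfig struct is a trimmed-down stand-in for the full config dumped earlier):

package main

import (
	"encoding/json"
	"errors"
	"os"
	"time"
)

// ClusterConfig is a trimmed-down stand-in for the profile config shown above.
type ClusterConfig struct {
	Name   string
	Driver string
	CPUs   int
	Memory int
}

// saveWithLock takes a crude lock file, retrying every `delay` until `timeout`,
// then writes the config as JSON and releases the lock.
func saveWithLock(path string, cfg ClusterConfig, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			break
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for " + lock)
		}
		time.Sleep(delay)
	}
	defer os.Remove(lock)

	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	cfg := ClusterConfig{Name: "ha-163902", Driver: "kvm2", CPUs: 2, Memory: 2200}
	if err := saveWithLock("config.json", cfg, 500*time.Millisecond, time.Minute); err != nil {
		panic(err)
	}
}
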
	I0819 19:05:31.495004  452010 start.go:360] acquireMachinesLock for ha-163902: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:05:31.495032  452010 start.go:364] duration metric: took 15.004µs to acquireMachinesLock for "ha-163902"
	I0819 19:05:31.495049  452010 start.go:93] Provisioning new machine with config: &{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:05:31.495114  452010 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 19:05:31.496975  452010 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 19:05:31.497125  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:05:31.497232  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:05:31.512292  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I0819 19:05:31.512869  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:05:31.513604  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:05:31.513630  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:05:31.514037  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:05:31.514230  452010 main.go:141] libmachine: (ha-163902) Calling .GetMachineName
	I0819 19:05:31.514445  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:31.514633  452010 start.go:159] libmachine.API.Create for "ha-163902" (driver="kvm2")
	I0819 19:05:31.514661  452010 client.go:168] LocalClient.Create starting
	I0819 19:05:31.514691  452010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem
	I0819 19:05:31.514723  452010 main.go:141] libmachine: Decoding PEM data...
	I0819 19:05:31.514736  452010 main.go:141] libmachine: Parsing certificate...
	I0819 19:05:31.514784  452010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem
	I0819 19:05:31.514803  452010 main.go:141] libmachine: Decoding PEM data...
	I0819 19:05:31.514814  452010 main.go:141] libmachine: Parsing certificate...
	I0819 19:05:31.514829  452010 main.go:141] libmachine: Running pre-create checks...
	I0819 19:05:31.514838  452010 main.go:141] libmachine: (ha-163902) Calling .PreCreateCheck
	I0819 19:05:31.515251  452010 main.go:141] libmachine: (ha-163902) Calling .GetConfigRaw
	I0819 19:05:31.515658  452010 main.go:141] libmachine: Creating machine...
	I0819 19:05:31.515672  452010 main.go:141] libmachine: (ha-163902) Calling .Create
	I0819 19:05:31.515803  452010 main.go:141] libmachine: (ha-163902) Creating KVM machine...
	I0819 19:05:31.517120  452010 main.go:141] libmachine: (ha-163902) DBG | found existing default KVM network
	I0819 19:05:31.517965  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:31.517812  452034 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201330}
	I0819 19:05:31.517985  452010 main.go:141] libmachine: (ha-163902) DBG | created network xml: 
	I0819 19:05:31.518000  452010 main.go:141] libmachine: (ha-163902) DBG | <network>
	I0819 19:05:31.518008  452010 main.go:141] libmachine: (ha-163902) DBG |   <name>mk-ha-163902</name>
	I0819 19:05:31.518016  452010 main.go:141] libmachine: (ha-163902) DBG |   <dns enable='no'/>
	I0819 19:05:31.518022  452010 main.go:141] libmachine: (ha-163902) DBG |   
	I0819 19:05:31.518031  452010 main.go:141] libmachine: (ha-163902) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 19:05:31.518039  452010 main.go:141] libmachine: (ha-163902) DBG |     <dhcp>
	I0819 19:05:31.518048  452010 main.go:141] libmachine: (ha-163902) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 19:05:31.518060  452010 main.go:141] libmachine: (ha-163902) DBG |     </dhcp>
	I0819 19:05:31.518072  452010 main.go:141] libmachine: (ha-163902) DBG |   </ip>
	I0819 19:05:31.518083  452010 main.go:141] libmachine: (ha-163902) DBG |   
	I0819 19:05:31.518126  452010 main.go:141] libmachine: (ha-163902) DBG | </network>
	I0819 19:05:31.518164  452010 main.go:141] libmachine: (ha-163902) DBG | 
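
The XML dumped above is the private libvirt network the driver is about to create. A sketch of the equivalent define-and-start calls, assuming the libvirt.org/go/libvirt bindings are available; this is not minikube's code, only the shape of the step:

package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-ha-163902</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	net, err := conn.NetworkDefineXML(networkXML) // persist the definition
	if err != nil {
		panic(err)
	}
	defer net.Free()

	if err := net.Create(); err != nil { // start it, like `virsh net-start mk-ha-163902`
		panic(err)
	}
	if err := net.SetAutostart(true); err != nil {
		panic(err)
	}
	fmt.Println("network mk-ha-163902 defined and started")
}
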
	I0819 19:05:31.524005  452010 main.go:141] libmachine: (ha-163902) DBG | trying to create private KVM network mk-ha-163902 192.168.39.0/24...
	I0819 19:05:31.603853  452010 main.go:141] libmachine: (ha-163902) Setting up store path in /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902 ...
	I0819 19:05:31.603893  452010 main.go:141] libmachine: (ha-163902) DBG | private KVM network mk-ha-163902 192.168.39.0/24 created
	I0819 19:05:31.603907  452010 main.go:141] libmachine: (ha-163902) Building disk image from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 19:05:31.603928  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:31.603779  452034 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:05:31.603949  452010 main.go:141] libmachine: (ha-163902) Downloading /home/jenkins/minikube-integration/19423-430949/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 19:05:31.884760  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:31.884593  452034 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa...
	I0819 19:05:32.045420  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:32.045265  452034 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/ha-163902.rawdisk...
	I0819 19:05:32.045453  452010 main.go:141] libmachine: (ha-163902) DBG | Writing magic tar header
	I0819 19:05:32.045465  452010 main.go:141] libmachine: (ha-163902) DBG | Writing SSH key tar header
	I0819 19:05:32.045473  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:32.045407  452034 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902 ...
	I0819 19:05:32.045639  452010 main.go:141] libmachine: (ha-163902) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902 (perms=drwx------)
	I0819 19:05:32.045668  452010 main.go:141] libmachine: (ha-163902) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902
	I0819 19:05:32.045678  452010 main.go:141] libmachine: (ha-163902) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines (perms=drwxr-xr-x)
	I0819 19:05:32.045701  452010 main.go:141] libmachine: (ha-163902) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube (perms=drwxr-xr-x)
	I0819 19:05:32.045713  452010 main.go:141] libmachine: (ha-163902) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949 (perms=drwxrwxr-x)
	I0819 19:05:32.045726  452010 main.go:141] libmachine: (ha-163902) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 19:05:32.045738  452010 main.go:141] libmachine: (ha-163902) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 19:05:32.045748  452010 main.go:141] libmachine: (ha-163902) Creating domain...
	I0819 19:05:32.045766  452010 main.go:141] libmachine: (ha-163902) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines
	I0819 19:05:32.045780  452010 main.go:141] libmachine: (ha-163902) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:05:32.045793  452010 main.go:141] libmachine: (ha-163902) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949
	I0819 19:05:32.045807  452010 main.go:141] libmachine: (ha-163902) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 19:05:32.045818  452010 main.go:141] libmachine: (ha-163902) DBG | Checking permissions on dir: /home/jenkins
	I0819 19:05:32.045828  452010 main.go:141] libmachine: (ha-163902) DBG | Checking permissions on dir: /home
	I0819 19:05:32.045838  452010 main.go:141] libmachine: (ha-163902) DBG | Skipping /home - not owner
	I0819 19:05:32.047042  452010 main.go:141] libmachine: (ha-163902) define libvirt domain using xml: 
	I0819 19:05:32.047066  452010 main.go:141] libmachine: (ha-163902) <domain type='kvm'>
	I0819 19:05:32.047074  452010 main.go:141] libmachine: (ha-163902)   <name>ha-163902</name>
	I0819 19:05:32.047081  452010 main.go:141] libmachine: (ha-163902)   <memory unit='MiB'>2200</memory>
	I0819 19:05:32.047116  452010 main.go:141] libmachine: (ha-163902)   <vcpu>2</vcpu>
	I0819 19:05:32.047138  452010 main.go:141] libmachine: (ha-163902)   <features>
	I0819 19:05:32.047170  452010 main.go:141] libmachine: (ha-163902)     <acpi/>
	I0819 19:05:32.047193  452010 main.go:141] libmachine: (ha-163902)     <apic/>
	I0819 19:05:32.047207  452010 main.go:141] libmachine: (ha-163902)     <pae/>
	I0819 19:05:32.047217  452010 main.go:141] libmachine: (ha-163902)     
	I0819 19:05:32.047222  452010 main.go:141] libmachine: (ha-163902)   </features>
	I0819 19:05:32.047231  452010 main.go:141] libmachine: (ha-163902)   <cpu mode='host-passthrough'>
	I0819 19:05:32.047235  452010 main.go:141] libmachine: (ha-163902)   
	I0819 19:05:32.047245  452010 main.go:141] libmachine: (ha-163902)   </cpu>
	I0819 19:05:32.047251  452010 main.go:141] libmachine: (ha-163902)   <os>
	I0819 19:05:32.047259  452010 main.go:141] libmachine: (ha-163902)     <type>hvm</type>
	I0819 19:05:32.047272  452010 main.go:141] libmachine: (ha-163902)     <boot dev='cdrom'/>
	I0819 19:05:32.047282  452010 main.go:141] libmachine: (ha-163902)     <boot dev='hd'/>
	I0819 19:05:32.047298  452010 main.go:141] libmachine: (ha-163902)     <bootmenu enable='no'/>
	I0819 19:05:32.047314  452010 main.go:141] libmachine: (ha-163902)   </os>
	I0819 19:05:32.047327  452010 main.go:141] libmachine: (ha-163902)   <devices>
	I0819 19:05:32.047338  452010 main.go:141] libmachine: (ha-163902)     <disk type='file' device='cdrom'>
	I0819 19:05:32.047355  452010 main.go:141] libmachine: (ha-163902)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/boot2docker.iso'/>
	I0819 19:05:32.047365  452010 main.go:141] libmachine: (ha-163902)       <target dev='hdc' bus='scsi'/>
	I0819 19:05:32.047378  452010 main.go:141] libmachine: (ha-163902)       <readonly/>
	I0819 19:05:32.047389  452010 main.go:141] libmachine: (ha-163902)     </disk>
	I0819 19:05:32.047402  452010 main.go:141] libmachine: (ha-163902)     <disk type='file' device='disk'>
	I0819 19:05:32.047414  452010 main.go:141] libmachine: (ha-163902)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 19:05:32.047427  452010 main.go:141] libmachine: (ha-163902)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/ha-163902.rawdisk'/>
	I0819 19:05:32.047437  452010 main.go:141] libmachine: (ha-163902)       <target dev='hda' bus='virtio'/>
	I0819 19:05:32.047445  452010 main.go:141] libmachine: (ha-163902)     </disk>
	I0819 19:05:32.047455  452010 main.go:141] libmachine: (ha-163902)     <interface type='network'>
	I0819 19:05:32.047477  452010 main.go:141] libmachine: (ha-163902)       <source network='mk-ha-163902'/>
	I0819 19:05:32.047489  452010 main.go:141] libmachine: (ha-163902)       <model type='virtio'/>
	I0819 19:05:32.047496  452010 main.go:141] libmachine: (ha-163902)     </interface>
	I0819 19:05:32.047501  452010 main.go:141] libmachine: (ha-163902)     <interface type='network'>
	I0819 19:05:32.047509  452010 main.go:141] libmachine: (ha-163902)       <source network='default'/>
	I0819 19:05:32.047513  452010 main.go:141] libmachine: (ha-163902)       <model type='virtio'/>
	I0819 19:05:32.047521  452010 main.go:141] libmachine: (ha-163902)     </interface>
	I0819 19:05:32.047525  452010 main.go:141] libmachine: (ha-163902)     <serial type='pty'>
	I0819 19:05:32.047530  452010 main.go:141] libmachine: (ha-163902)       <target port='0'/>
	I0819 19:05:32.047537  452010 main.go:141] libmachine: (ha-163902)     </serial>
	I0819 19:05:32.047542  452010 main.go:141] libmachine: (ha-163902)     <console type='pty'>
	I0819 19:05:32.047551  452010 main.go:141] libmachine: (ha-163902)       <target type='serial' port='0'/>
	I0819 19:05:32.047564  452010 main.go:141] libmachine: (ha-163902)     </console>
	I0819 19:05:32.047575  452010 main.go:141] libmachine: (ha-163902)     <rng model='virtio'>
	I0819 19:05:32.047592  452010 main.go:141] libmachine: (ha-163902)       <backend model='random'>/dev/random</backend>
	I0819 19:05:32.047605  452010 main.go:141] libmachine: (ha-163902)     </rng>
	I0819 19:05:32.047613  452010 main.go:141] libmachine: (ha-163902)     
	I0819 19:05:32.047630  452010 main.go:141] libmachine: (ha-163902)     
	I0819 19:05:32.047642  452010 main.go:141] libmachine: (ha-163902)   </devices>
	I0819 19:05:32.047652  452010 main.go:141] libmachine: (ha-163902) </domain>
	I0819 19:05:32.047663  452010 main.go:141] libmachine: (ha-163902) 
	I0819 19:05:32.052511  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:6b:f0:f7 in network default
	I0819 19:05:32.053337  452010 main.go:141] libmachine: (ha-163902) Ensuring networks are active...
	I0819 19:05:32.053362  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:32.054093  452010 main.go:141] libmachine: (ha-163902) Ensuring network default is active
	I0819 19:05:32.054399  452010 main.go:141] libmachine: (ha-163902) Ensuring network mk-ha-163902 is active
	I0819 19:05:32.054895  452010 main.go:141] libmachine: (ha-163902) Getting domain xml...
	I0819 19:05:32.055541  452010 main.go:141] libmachine: (ha-163902) Creating domain...
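
"Getting domain xml..." and "Creating domain..." correspond to defining the <domain> document shown above and booting it. A minimal sketch, again assuming the libvirt.org/go/libvirt bindings and reading the XML from a local file rather than generating it:

package main

import (
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-163902.xml") // the <domain> XML dumped above
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persistent definition
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the VM, like `virsh start ha-163902`
		panic(err)
	}
}
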
	I0819 19:05:33.295790  452010 main.go:141] libmachine: (ha-163902) Waiting to get IP...
	I0819 19:05:33.296639  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:33.297086  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:33.297153  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:33.297073  452034 retry.go:31] will retry after 235.373593ms: waiting for machine to come up
	I0819 19:05:33.534776  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:33.535248  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:33.535276  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:33.535203  452034 retry.go:31] will retry after 372.031549ms: waiting for machine to come up
	I0819 19:05:33.908862  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:33.909298  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:33.909329  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:33.909258  452034 retry.go:31] will retry after 461.573677ms: waiting for machine to come up
	I0819 19:05:34.373270  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:34.373854  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:34.373878  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:34.373800  452034 retry.go:31] will retry after 374.272193ms: waiting for machine to come up
	I0819 19:05:34.749561  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:34.750084  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:34.750118  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:34.750021  452034 retry.go:31] will retry after 678.038494ms: waiting for machine to come up
	I0819 19:05:35.429875  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:35.430266  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:35.430297  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:35.430221  452034 retry.go:31] will retry after 797.074334ms: waiting for machine to come up
	I0819 19:05:36.229400  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:36.229868  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:36.229957  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:36.229817  452034 retry.go:31] will retry after 1.092014853s: waiting for machine to come up
	I0819 19:05:37.323998  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:37.324515  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:37.324545  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:37.324450  452034 retry.go:31] will retry after 1.272539267s: waiting for machine to come up
	I0819 19:05:38.599242  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:38.599875  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:38.599904  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:38.599824  452034 retry.go:31] will retry after 1.464855471s: waiting for machine to come up
	I0819 19:05:40.066143  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:40.066660  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:40.066688  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:40.066595  452034 retry.go:31] will retry after 1.829451481s: waiting for machine to come up
	I0819 19:05:41.897944  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:41.898352  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:41.898384  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:41.898295  452034 retry.go:31] will retry after 2.819732082s: waiting for machine to come up
	I0819 19:05:44.719420  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:44.719862  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:44.719886  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:44.719819  452034 retry.go:31] will retry after 2.733084141s: waiting for machine to come up
	I0819 19:05:47.454272  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:47.454861  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:47.454890  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:47.454791  452034 retry.go:31] will retry after 3.235083135s: waiting for machine to come up
	I0819 19:05:50.693380  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:50.693783  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find current IP address of domain ha-163902 in network mk-ha-163902
	I0819 19:05:50.693816  452010 main.go:141] libmachine: (ha-163902) DBG | I0819 19:05:50.693726  452034 retry.go:31] will retry after 4.687824547s: waiting for machine to come up
	I0819 19:05:55.385601  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.386026  452010 main.go:141] libmachine: (ha-163902) Found IP for machine: 192.168.39.227
	I0819 19:05:55.386051  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has current primary IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
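
The retry lines above poll the network's DHCP leases until the domain's MAC address appears, sleeping a growing interval between attempts. A sketch of that wait loop (assumed libvirt.org/go/libvirt bindings; the initial delay, backoff cap, and timeout are example values, not minikube's):

package main

import (
	"fmt"
	"time"

	libvirt "libvirt.org/go/libvirt"
)

// waitForIP polls the network's DHCP leases for the given MAC, backing off
// between attempts until a lease is found or the timeout expires.
func waitForIP(net *libvirt.Network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		leases, err := net.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if l.Mac == mac {
				return l.IPaddr, nil
			}
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
}

func main() { // error handling elided for brevity
	conn, _ := libvirt.NewConnect("qemu:///system")
	defer conn.Close()
	net, _ := conn.LookupNetworkByName("mk-ha-163902")
	defer net.Free()
	fmt.Println(waitForIP(net, "52:54:00:57:94:b4", 2*time.Minute))
}
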
	I0819 19:05:55.386058  452010 main.go:141] libmachine: (ha-163902) Reserving static IP address...
	I0819 19:05:55.386385  452010 main.go:141] libmachine: (ha-163902) DBG | unable to find host DHCP lease matching {name: "ha-163902", mac: "52:54:00:57:94:b4", ip: "192.168.39.227"} in network mk-ha-163902
	I0819 19:05:55.470803  452010 main.go:141] libmachine: (ha-163902) DBG | Getting to WaitForSSH function...
	I0819 19:05:55.470832  452010 main.go:141] libmachine: (ha-163902) Reserved static IP address: 192.168.39.227
	I0819 19:05:55.470842  452010 main.go:141] libmachine: (ha-163902) Waiting for SSH to be available...
	I0819 19:05:55.473458  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.473843  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:55.473867  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.474095  452010 main.go:141] libmachine: (ha-163902) DBG | Using SSH client type: external
	I0819 19:05:55.474116  452010 main.go:141] libmachine: (ha-163902) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa (-rw-------)
	I0819 19:05:55.474172  452010 main.go:141] libmachine: (ha-163902) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:05:55.474194  452010 main.go:141] libmachine: (ha-163902) DBG | About to run SSH command:
	I0819 19:05:55.474208  452010 main.go:141] libmachine: (ha-163902) DBG | exit 0
	I0819 19:05:55.597283  452010 main.go:141] libmachine: (ha-163902) DBG | SSH cmd err, output: <nil>: 
	I0819 19:05:55.597559  452010 main.go:141] libmachine: (ha-163902) KVM machine creation complete!
	I0819 19:05:55.597983  452010 main.go:141] libmachine: (ha-163902) Calling .GetConfigRaw
	I0819 19:05:55.598555  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:55.598774  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:55.598943  452010 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 19:05:55.598968  452010 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:05:55.600319  452010 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 19:05:55.600346  452010 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 19:05:55.600352  452010 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 19:05:55.600358  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:55.602646  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.603087  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:55.603119  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.603245  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:55.603449  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.603621  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.603759  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:55.603945  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:05:55.604156  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:05:55.604168  452010 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 19:05:55.704470  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:05:55.704499  452010 main.go:141] libmachine: Detecting the provisioner...
	I0819 19:05:55.704510  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:55.707585  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.708003  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:55.708018  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.708210  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:55.708434  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.708619  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.708789  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:55.708997  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:05:55.709234  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:05:55.709250  452010 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 19:05:55.814037  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 19:05:55.814129  452010 main.go:141] libmachine: found compatible host: buildroot
	I0819 19:05:55.814143  452010 main.go:141] libmachine: Provisioning with buildroot...
	I0819 19:05:55.814151  452010 main.go:141] libmachine: (ha-163902) Calling .GetMachineName
	I0819 19:05:55.814414  452010 buildroot.go:166] provisioning hostname "ha-163902"
	I0819 19:05:55.814443  452010 main.go:141] libmachine: (ha-163902) Calling .GetMachineName
	I0819 19:05:55.814730  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:55.817631  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.817991  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:55.818014  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.818198  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:55.818407  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.818543  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.818666  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:55.818844  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:05:55.819030  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:05:55.819042  452010 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-163902 && echo "ha-163902" | sudo tee /etc/hostname
	I0819 19:05:55.936372  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-163902
	
	I0819 19:05:55.936408  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:55.939125  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.939548  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:55.939576  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:55.939755  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:55.939961  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.940154  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:55.940278  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:55.940417  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:05:55.940629  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:05:55.940652  452010 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-163902' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-163902/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-163902' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:05:56.049627  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:05:56.049665  452010 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:05:56.049694  452010 buildroot.go:174] setting up certificates
	I0819 19:05:56.049709  452010 provision.go:84] configureAuth start
	I0819 19:05:56.049724  452010 main.go:141] libmachine: (ha-163902) Calling .GetMachineName
	I0819 19:05:56.050048  452010 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:05:56.052735  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.053044  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.053078  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.053336  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:56.055736  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.056089  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.056117  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.056279  452010 provision.go:143] copyHostCerts
	I0819 19:05:56.056312  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:05:56.056346  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:05:56.056364  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:05:56.056431  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:05:56.056563  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:05:56.056581  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:05:56.056586  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:05:56.056612  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:05:56.056656  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:05:56.056675  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:05:56.056678  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:05:56.056698  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:05:56.056741  452010 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.ha-163902 san=[127.0.0.1 192.168.39.227 ha-163902 localhost minikube]
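
The line above records the SAN list minikube puts on the node's server certificate. A rough sketch of issuing such a CA-signed certificate with Go's crypto/x509 (illustrative only, not minikube's code; the file names, the PKCS#1 RSA CA key format, and the 2048-bit key size are assumptions, and error handling is elided):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA pair (ca.pem / ca-key.pem, as referenced in the log).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-163902"}},
		// SANs from the log line above.
		DNSNames:    []string{"ha-163902", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.227")},
		NotBefore:   time.Now().Add(-time.Hour),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
}
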
	I0819 19:05:56.321863  452010 provision.go:177] copyRemoteCerts
	I0819 19:05:56.321953  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:05:56.321981  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:56.325000  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.325450  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.325486  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.325716  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:56.325967  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.326145  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:56.326344  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:05:56.407242  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 19:05:56.407315  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:05:56.432002  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 19:05:56.432072  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:05:56.455580  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 19:05:56.455641  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0819 19:05:56.479145  452010 provision.go:87] duration metric: took 429.421483ms to configureAuth
	I0819 19:05:56.479183  452010 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:05:56.479390  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:05:56.479512  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:56.482475  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.482826  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.482857  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.483040  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:56.483280  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.483461  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.483617  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:56.483794  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:05:56.483982  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:05:56.484004  452010 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:05:56.736096  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:05:56.736129  452010 main.go:141] libmachine: Checking connection to Docker...
	I0819 19:05:56.736141  452010 main.go:141] libmachine: (ha-163902) Calling .GetURL
	I0819 19:05:56.737376  452010 main.go:141] libmachine: (ha-163902) DBG | Using libvirt version 6000000
	I0819 19:05:56.739791  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.740149  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.740176  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.740410  452010 main.go:141] libmachine: Docker is up and running!
	I0819 19:05:56.740428  452010 main.go:141] libmachine: Reticulating splines...
	I0819 19:05:56.740437  452010 client.go:171] duration metric: took 25.225767843s to LocalClient.Create
	I0819 19:05:56.740467  452010 start.go:167] duration metric: took 25.225834543s to libmachine.API.Create "ha-163902"
	I0819 19:05:56.740480  452010 start.go:293] postStartSetup for "ha-163902" (driver="kvm2")
	I0819 19:05:56.740493  452010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:05:56.740508  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:56.740744  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:05:56.740769  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:56.743112  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.743433  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.743462  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.743701  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:56.743894  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.744047  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:56.744157  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:05:56.823418  452010 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:05:56.827918  452010 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:05:56.827953  452010 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:05:56.828030  452010 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:05:56.828115  452010 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:05:56.828127  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 19:05:56.828222  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:05:56.837576  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:05:56.861282  452010 start.go:296] duration metric: took 120.784925ms for postStartSetup
	I0819 19:05:56.861343  452010 main.go:141] libmachine: (ha-163902) Calling .GetConfigRaw
	I0819 19:05:56.862005  452010 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:05:56.864830  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.865251  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.865277  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.865556  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:05:56.865749  452010 start.go:128] duration metric: took 25.370624874s to createHost
	I0819 19:05:56.865772  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:56.868179  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.868523  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.868556  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.868743  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:56.868961  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.869123  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.869266  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:56.869432  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:05:56.869640  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:05:56.869659  452010 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:05:56.969674  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094356.940579657
	
	I0819 19:05:56.969700  452010 fix.go:216] guest clock: 1724094356.940579657
	I0819 19:05:56.969709  452010 fix.go:229] Guest: 2024-08-19 19:05:56.940579657 +0000 UTC Remote: 2024-08-19 19:05:56.865761238 +0000 UTC m=+25.484677957 (delta=74.818419ms)
	I0819 19:05:56.969731  452010 fix.go:200] guest clock delta is within tolerance: 74.818419ms
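
The guest clock check runs date +%s.%N over SSH and compares the result with the local clock. A minimal sketch of the delta computation (the 2s tolerance is an assumed example value; the log only states that the delta was within tolerance):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns how far
// it is from the local clock.
func guestClockDelta(output string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(output), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	delta, err := guestClockDelta("1724094356.940579657") // value from the log above
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed example threshold
	fmt.Printf("delta=%s within tolerance=%v\n", delta, delta <= tolerance)
}
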
	I0819 19:05:56.969737  452010 start.go:83] releasing machines lock for "ha-163902", held for 25.474696847s
	I0819 19:05:56.969759  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:56.970089  452010 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:05:56.972736  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.973089  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.973119  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.973315  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:56.973836  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:56.974016  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:05:56.974079  452010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:05:56.974130  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:56.974231  452010 ssh_runner.go:195] Run: cat /version.json
	I0819 19:05:56.974252  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:05:56.976706  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.976836  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.977227  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.977255  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.977376  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:56.977406  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:56.977410  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:56.977569  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:05:56.977645  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.977754  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:05:56.977900  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:56.977964  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:05:56.978038  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:05:56.978099  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:05:57.074642  452010 ssh_runner.go:195] Run: systemctl --version
	I0819 19:05:57.080599  452010 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:05:57.240792  452010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:05:57.246582  452010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:05:57.246671  452010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:05:57.262756  452010 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
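[editor's note] For readability, the find invocation logged above, which sidelines any bridge/podman CNI configuration so it cannot conflict with the cluster's own CNI, is equivalent to roughly:

    # Rename matching CNI configs to *.mk_disabled, skipping files already disabled.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' \
      -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;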
	I0819 19:05:57.262810  452010 start.go:495] detecting cgroup driver to use...
	I0819 19:05:57.262888  452010 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:05:57.279672  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:05:57.294164  452010 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:05:57.294248  452010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:05:57.308353  452010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:05:57.322390  452010 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:05:57.436990  452010 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:05:57.602038  452010 docker.go:233] disabling docker service ...
	I0819 19:05:57.602118  452010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:05:57.616232  452010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:05:57.629871  452010 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:05:57.755386  452010 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:05:57.872386  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
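[editor's note] Because this profile uses CRI-O, the block above makes sure the Docker-based runtimes stay out of the way: cri-docker and docker are each stopped, their sockets disabled, and their services masked. A condensed equivalent of the same sequence, for reference:

    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" || true   # ignore units that are not present
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service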
	I0819 19:05:57.886357  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:05:57.904738  452010 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:05:57.904798  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:05:57.915183  452010 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:05:57.915262  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:05:57.925467  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:05:57.935604  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:05:57.946343  452010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:05:57.957039  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:05:57.967274  452010 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:05:57.984485  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
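[editor's note] Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (an illustrative reconstruction of the affected keys only, not a verbatim dump of the file; TOML section headers omitted):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]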
	I0819 19:05:57.994896  452010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:05:58.004202  452010 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:05:58.004275  452010 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:05:58.016953  452010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
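[editor's note] The sysctl check fails only because the br_netfilter module is not loaded yet; loading it and enabling IPv4 forwarding, exactly as the next two logged commands do, is sufficient. To verify by hand on the guest:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables   # resolves now that the module is loaded
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward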
	I0819 19:05:58.026601  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:05:58.143400  452010 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:05:58.277062  452010 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:05:58.277167  452010 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:05:58.281828  452010 start.go:563] Will wait 60s for crictl version
	I0819 19:05:58.281896  452010 ssh_runner.go:195] Run: which crictl
	I0819 19:05:58.285555  452010 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:05:58.321545  452010 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:05:58.321626  452010 ssh_runner.go:195] Run: crio --version
	I0819 19:05:58.348350  452010 ssh_runner.go:195] Run: crio --version
	I0819 19:05:58.378204  452010 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:05:58.379348  452010 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:05:58.381908  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:58.382305  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:05:58.382329  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:05:58.382563  452010 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:05:58.386764  452010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:05:58.399156  452010 kubeadm.go:883] updating cluster {Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:05:58.399272  452010 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:05:58.399332  452010 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:05:58.431678  452010 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:05:58.431751  452010 ssh_runner.go:195] Run: which lz4
	I0819 19:05:58.435341  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 19:05:58.435440  452010 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:05:58.439403  452010 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:05:58.439438  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:05:59.728799  452010 crio.go:462] duration metric: took 1.29338158s to copy over tarball
	I0819 19:05:59.728897  452010 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:06:01.890799  452010 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.161871811s)
	I0819 19:06:01.890832  452010 crio.go:469] duration metric: took 2.16199361s to extract the tarball
	I0819 19:06:01.890843  452010 ssh_runner.go:146] rm: /preloaded.tar.lz4
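[editor's note] The preloaded image tarball is copied into the guest and unpacked directly into /var, so CRI-O sees all control-plane images without pulling them. If the same preload ever needs to be applied manually, a rough equivalent is shown below (key path and /tmp staging location are illustrative; minikube itself copies straight to /preloaded.tar.lz4):

    KEY=$HOME/.minikube/machines/ha-163902/id_rsa
    PRELOAD=$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
    scp -i "$KEY" "$PRELOAD" docker@192.168.39.227:/tmp/preloaded.tar.lz4
    ssh -i "$KEY" docker@192.168.39.227 \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm /tmp/preloaded.tar.lz4'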
	I0819 19:06:01.929394  452010 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:06:01.976632  452010 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:06:01.976655  452010 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:06:01.976664  452010 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0 crio true true} ...
	I0819 19:06:01.976785  452010 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-163902 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
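[editor's note] This kubelet drop-in is written a little further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Once it is in place, the effective unit (and whether the override took) can be inspected on the guest with, for example:

    systemctl cat kubelet --no-pager      # base unit plus the 10-kubeadm.conf drop-in
    systemctl status kubelet --no-pager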
	I0819 19:06:01.976874  452010 ssh_runner.go:195] Run: crio config
	I0819 19:06:02.031929  452010 cni.go:84] Creating CNI manager for ""
	I0819 19:06:02.031959  452010 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 19:06:02.031971  452010 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:06:02.032002  452010 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-163902 NodeName:ha-163902 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:06:02.032186  452010 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-163902"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
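[editor's note] Before this generated config is handed to kubeadm it can be sanity-checked, and, given the v1beta3 deprecation warnings that appear after init below, migrated to the current API, with something like the following (output path illustrative; the file is staged as /var/tmp/minikube/kubeadm.yaml.new and copied to kubeadm.yaml before init):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-new.yaml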
	
	I0819 19:06:02.032220  452010 kube-vip.go:115] generating kube-vip config ...
	I0819 19:06:02.032296  452010 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 19:06:02.047887  452010 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 19:06:02.048023  452010 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
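[editor's note] This static-pod manifest is what places the HA virtual IP 192.168.39.254 (the APIServerHAVIP from the cluster config) on eth0 and load-balances port 8443 across control-plane nodes; it is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below. Once the kubelet is up, the result can be spot-checked on the node with, for instance:

    ip addr show dev eth0 | grep 192.168.39.254   # VIP bound by kube-vip
    sudo crictl ps --name kube-vip                # static pod created from this manifest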
	I0819 19:06:02.048094  452010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:06:02.057959  452010 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:06:02.058049  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 19:06:02.067968  452010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 19:06:02.084960  452010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:06:02.101596  452010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 19:06:02.118401  452010 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0819 19:06:02.135157  452010 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 19:06:02.139038  452010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
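[editor's note] The grep/echo/copy pipeline above is the usual pattern for editing /etc/hosts safely: rebuild the file without any stale control-plane.minikube.internal line, append the fresh mapping to the VIP, and copy the temporary file back over /etc/hosts (rather than sed -i, which replaces the inode and can fail where /etc/hosts is a bind mount). Spelled out:

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.39.254\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm /tmp/hosts.$$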
	I0819 19:06:02.151277  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:06:02.287982  452010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:06:02.305693  452010 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902 for IP: 192.168.39.227
	I0819 19:06:02.305726  452010 certs.go:194] generating shared ca certs ...
	I0819 19:06:02.305746  452010 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:02.305908  452010 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:06:02.305988  452010 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:06:02.306003  452010 certs.go:256] generating profile certs ...
	I0819 19:06:02.306073  452010 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key
	I0819 19:06:02.306104  452010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.crt with IP's: []
	I0819 19:06:02.433694  452010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.crt ...
	I0819 19:06:02.433730  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.crt: {Name:mk8bfdedc79175fd65d664bd895dabaee1f5368d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:02.433947  452010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key ...
	I0819 19:06:02.433970  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key: {Name:mk9c72d09ffba4dd19fb35a4717d614fa3a0d869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:02.434070  452010 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.c22e295a
	I0819 19:06:02.434086  452010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.c22e295a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.254]
	I0819 19:06:02.490434  452010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.c22e295a ...
	I0819 19:06:02.490465  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.c22e295a: {Name:mkb54a1fb887f906a05ab935bff349329bc82beb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:02.490630  452010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.c22e295a ...
	I0819 19:06:02.490651  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.c22e295a: {Name:mka31659a1a74ea6e771829c2dff31e6afb34975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:02.490719  452010 certs.go:381] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.c22e295a -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt
	I0819 19:06:02.490797  452010 certs.go:385] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.c22e295a -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key
	I0819 19:06:02.490850  452010 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key
	I0819 19:06:02.490865  452010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt with IP's: []
	I0819 19:06:02.628360  452010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt ...
	I0819 19:06:02.628394  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt: {Name:mkc9cf581f37f8a743e563825e7e50273a0a4f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:02.628561  452010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key ...
	I0819 19:06:02.628571  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key: {Name:mke69930ded311f6c6a36cae8ec6b8af054e66cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:02.628639  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 19:06:02.628655  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 19:06:02.628666  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 19:06:02.628678  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 19:06:02.628691  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 19:06:02.628701  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 19:06:02.628713  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 19:06:02.628727  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 19:06:02.628774  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:06:02.628812  452010 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:06:02.628823  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:06:02.628846  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:06:02.628869  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:06:02.628891  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:06:02.628928  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:06:02.628952  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 19:06:02.628967  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 19:06:02.628979  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
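[editor's note] Per the generation steps above, the profile's apiserver certificate is signed for the cluster service IP, localhost, the node IP, and the HA VIP. After the files listed here have been copied to /var/lib/minikube/certs, the SANs can be confirmed on the node with:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
    # expect 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.227 and 192.168.39.254 among the IP SANs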
	I0819 19:06:02.629578  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:06:02.654309  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:06:02.678599  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:06:02.703589  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:06:02.728503  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 19:06:02.753019  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:06:02.777539  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:06:02.802262  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:06:02.829634  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:06:02.856657  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:06:02.883381  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:06:02.907290  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:06:02.924124  452010 ssh_runner.go:195] Run: openssl version
	I0819 19:06:02.929964  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:06:02.940968  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:06:02.945479  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:06:02.945560  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:06:02.951359  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:06:02.962047  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:06:02.973287  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:06:02.977977  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:06:02.978049  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:06:02.983944  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:06:02.995228  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:06:03.006949  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:03.011870  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:03.011955  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:03.017883  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
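[editor's note] The numeric link names used here (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names, which is how the system trust store locates a CA certificate; the openssl x509 -hash calls in between compute exactly that value. The same link can be created by hand, e.g. for the minikube CA:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"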
	I0819 19:06:03.029106  452010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:06:03.033337  452010 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 19:06:03.033404  452010 kubeadm.go:392] StartCluster: {Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:06:03.033495  452010 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:06:03.033577  452010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:06:03.088306  452010 cri.go:89] found id: ""
	I0819 19:06:03.088379  452010 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:06:03.100633  452010 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:06:03.114308  452010 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:06:03.124116  452010 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:06:03.124139  452010 kubeadm.go:157] found existing configuration files:
	
	I0819 19:06:03.124185  452010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:06:03.133765  452010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:06:03.133877  452010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:06:03.143684  452010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:06:03.152949  452010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:06:03.153013  452010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:06:03.162852  452010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:06:03.172125  452010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:06:03.172206  452010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:06:03.182018  452010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:06:03.191319  452010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:06:03.191392  452010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:06:03.201176  452010 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:06:03.299118  452010 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 19:06:03.299249  452010 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:06:03.407542  452010 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:06:03.407664  452010 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:06:03.407777  452010 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 19:06:03.422769  452010 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:06:03.483698  452010 out.go:235]   - Generating certificates and keys ...
	I0819 19:06:03.483806  452010 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:06:03.483931  452010 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:06:03.553846  452010 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 19:06:03.736844  452010 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 19:06:03.949345  452010 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 19:06:04.058381  452010 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 19:06:04.276348  452010 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 19:06:04.276498  452010 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-163902 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0819 19:06:04.358230  452010 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 19:06:04.358465  452010 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-163902 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0819 19:06:04.658298  452010 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 19:06:04.771768  452010 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 19:06:05.013848  452010 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 19:06:05.014067  452010 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:06:05.101434  452010 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:06:05.147075  452010 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 19:06:05.335609  452010 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:06:05.577326  452010 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:06:05.782050  452010 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:06:05.782720  452010 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:06:05.786237  452010 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:06:05.857880  452010 out.go:235]   - Booting up control plane ...
	I0819 19:06:05.858076  452010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:06:05.858171  452010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:06:05.858294  452010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:06:05.858447  452010 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:06:05.858596  452010 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:06:05.858664  452010 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:06:05.959353  452010 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 19:06:05.959485  452010 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 19:06:06.964584  452010 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005758603s
	I0819 19:06:06.964714  452010 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 19:06:12.643972  452010 kubeadm.go:310] [api-check] The API server is healthy after 5.680423764s
	I0819 19:06:12.655198  452010 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 19:06:12.674235  452010 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 19:06:12.706731  452010 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 19:06:12.706972  452010 kubeadm.go:310] [mark-control-plane] Marking the node ha-163902 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 19:06:12.719625  452010 kubeadm.go:310] [bootstrap-token] Using token: ydvj8p.1o1g0g4n7744ocvt
	I0819 19:06:12.720986  452010 out.go:235]   - Configuring RBAC rules ...
	I0819 19:06:12.721175  452010 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 19:06:12.728254  452010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 19:06:12.737165  452010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 19:06:12.748696  452010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 19:06:12.753193  452010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 19:06:12.757577  452010 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 19:06:13.049055  452010 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 19:06:13.490560  452010 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 19:06:14.050301  452010 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 19:06:14.052795  452010 kubeadm.go:310] 
	I0819 19:06:14.052880  452010 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 19:06:14.052897  452010 kubeadm.go:310] 
	I0819 19:06:14.052996  452010 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 19:06:14.053006  452010 kubeadm.go:310] 
	I0819 19:06:14.053039  452010 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 19:06:14.053161  452010 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 19:06:14.053238  452010 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 19:06:14.053248  452010 kubeadm.go:310] 
	I0819 19:06:14.053326  452010 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 19:06:14.053338  452010 kubeadm.go:310] 
	I0819 19:06:14.053393  452010 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 19:06:14.053407  452010 kubeadm.go:310] 
	I0819 19:06:14.053480  452010 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 19:06:14.053583  452010 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 19:06:14.053646  452010 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 19:06:14.053652  452010 kubeadm.go:310] 
	I0819 19:06:14.053730  452010 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 19:06:14.053804  452010 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 19:06:14.053810  452010 kubeadm.go:310] 
	I0819 19:06:14.053883  452010 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ydvj8p.1o1g0g4n7744ocvt \
	I0819 19:06:14.053968  452010 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f \
	I0819 19:06:14.053990  452010 kubeadm.go:310] 	--control-plane 
	I0819 19:06:14.053996  452010 kubeadm.go:310] 
	I0819 19:06:14.054071  452010 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 19:06:14.054077  452010 kubeadm.go:310] 
	I0819 19:06:14.054147  452010 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ydvj8p.1o1g0g4n7744ocvt \
	I0819 19:06:14.054243  452010 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f 
	I0819 19:06:14.055816  452010 kubeadm.go:310] W0819 19:06:03.271368     848 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:06:14.056128  452010 kubeadm.go:310] W0819 19:06:03.272125     848 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:06:14.056225  452010 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:06:14.056250  452010 cni.go:84] Creating CNI manager for ""
	I0819 19:06:14.056259  452010 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 19:06:14.057946  452010 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 19:06:14.059253  452010 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 19:06:14.064807  452010 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 19:06:14.064830  452010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 19:06:14.086444  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
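[editor's note] With the CNI manifest applied, the usual follow-up is to wait for the kindnet DaemonSet and CoreDNS to come up; a sketch, assuming the DaemonSet keeps its upstream name kindnet:

    KUBECTL=/var/lib/minikube/binaries/v1.31.0/kubectl
    sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
      rollout status daemonset kindnet --timeout=120s
    sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -o wide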
	I0819 19:06:14.506238  452010 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:06:14.506388  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:14.506381  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-163902 minikube.k8s.io/updated_at=2024_08_19T19_06_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=ha-163902 minikube.k8s.io/primary=true
	I0819 19:06:14.727144  452010 ops.go:34] apiserver oom_adj: -16
	I0819 19:06:14.727339  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:15.228325  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:15.727767  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:16.228014  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:16.727822  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:17.227601  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:17.727580  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:06:17.850730  452010 kubeadm.go:1113] duration metric: took 3.344418538s to wait for elevateKubeSystemPrivileges
	I0819 19:06:17.850766  452010 kubeadm.go:394] duration metric: took 14.817365401s to StartCluster
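[editor's note] The repeated kubectl get sa default calls above are minikube polling (roughly every 500 ms, per the timestamps) until the default ServiceAccount exists, i.e. until the controller-manager is actually reconciling, after it has granted kube-system:default cluster-admin through the minikube-rbac ClusterRoleBinding. Done by hand, the wait is simply:

    KUBECTL=/var/lib/minikube/binaries/v1.31.0/kubectl
    until sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done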
	I0819 19:06:17.850791  452010 settings.go:142] acquiring lock: {Name:mkc71def6be9966e5f008a22161bf3ed26f482b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:17.850881  452010 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:06:17.851520  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:17.851775  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 19:06:17.851803  452010 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:06:17.851867  452010 addons.go:69] Setting storage-provisioner=true in profile "ha-163902"
	I0819 19:06:17.851769  452010 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:06:17.851899  452010 addons.go:234] Setting addon storage-provisioner=true in "ha-163902"
	I0819 19:06:17.851910  452010 start.go:241] waiting for startup goroutines ...
	I0819 19:06:17.851921  452010 addons.go:69] Setting default-storageclass=true in profile "ha-163902"
	I0819 19:06:17.851928  452010 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:06:17.851957  452010 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-163902"
	I0819 19:06:17.851957  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:06:17.852317  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:17.852347  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:17.852660  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:17.852801  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:17.868850  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33535
	I0819 19:06:17.869366  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:17.869959  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:17.869987  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:17.870339  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:17.870826  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:17.870851  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:17.874949  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44305
	I0819 19:06:17.875430  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:17.875975  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:17.876002  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:17.876402  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:17.876641  452010 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:06:17.879064  452010 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:06:17.879313  452010 kapi.go:59] client config for ha-163902: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key", CAFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 19:06:17.879823  452010 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 19:06:17.880007  452010 addons.go:234] Setting addon default-storageclass=true in "ha-163902"
	I0819 19:06:17.880044  452010 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:06:17.880336  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:17.880377  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:17.888108  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38721
	I0819 19:06:17.888682  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:17.889289  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:17.889315  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:17.889785  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:17.890009  452010 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:06:17.892185  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:06:17.894187  452010 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:06:17.895505  452010 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:06:17.895532  452010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:06:17.895558  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:06:17.898400  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I0819 19:06:17.898852  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:17.899075  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:17.899443  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:17.899459  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:17.899525  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:06:17.899543  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:17.899765  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:06:17.899881  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:17.899946  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:06:17.900078  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:06:17.900196  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:06:17.900478  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:17.900525  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:17.916366  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42309
	I0819 19:06:17.916867  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:17.917444  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:17.917473  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:17.917848  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:17.918070  452010 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:06:17.919771  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:06:17.920045  452010 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:06:17.920066  452010 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:06:17.920087  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:06:17.923139  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:17.923619  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:06:17.923650  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:17.923833  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:06:17.924044  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:06:17.924212  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:06:17.924373  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:06:18.071538  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 19:06:18.076040  452010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:06:18.089047  452010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:06:18.600380  452010 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0819 19:06:18.842364  452010 main.go:141] libmachine: Making call to close driver server
	I0819 19:06:18.842392  452010 main.go:141] libmachine: (ha-163902) Calling .Close
	I0819 19:06:18.842435  452010 main.go:141] libmachine: Making call to close driver server
	I0819 19:06:18.842454  452010 main.go:141] libmachine: (ha-163902) Calling .Close
	I0819 19:06:18.842721  452010 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:06:18.842741  452010 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:06:18.842758  452010 main.go:141] libmachine: Making call to close driver server
	I0819 19:06:18.842768  452010 main.go:141] libmachine: (ha-163902) Calling .Close
	I0819 19:06:18.842825  452010 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:06:18.842844  452010 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:06:18.842853  452010 main.go:141] libmachine: Making call to close driver server
	I0819 19:06:18.842861  452010 main.go:141] libmachine: (ha-163902) Calling .Close
	I0819 19:06:18.842829  452010 main.go:141] libmachine: (ha-163902) DBG | Closing plugin on server side
	I0819 19:06:18.842992  452010 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:06:18.843008  452010 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:06:18.843066  452010 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 19:06:18.843083  452010 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 19:06:18.843176  452010 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 19:06:18.843183  452010 round_trippers.go:469] Request Headers:
	I0819 19:06:18.843194  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:06:18.843199  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:06:18.843453  452010 main.go:141] libmachine: (ha-163902) DBG | Closing plugin on server side
	I0819 19:06:18.843492  452010 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:06:18.843507  452010 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:06:18.855500  452010 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0819 19:06:18.856225  452010 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 19:06:18.856244  452010 round_trippers.go:469] Request Headers:
	I0819 19:06:18.856255  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:06:18.856264  452010 round_trippers.go:473]     Content-Type: application/json
	I0819 19:06:18.856268  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:06:18.859394  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:06:18.859620  452010 main.go:141] libmachine: Making call to close driver server
	I0819 19:06:18.859638  452010 main.go:141] libmachine: (ha-163902) Calling .Close
	I0819 19:06:18.859971  452010 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:06:18.859990  452010 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:06:18.862354  452010 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 19:06:18.863459  452010 addons.go:510] duration metric: took 1.011661335s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 19:06:18.863502  452010 start.go:246] waiting for cluster config update ...
	I0819 19:06:18.863519  452010 start.go:255] writing updated cluster config ...
	I0819 19:06:18.865072  452010 out.go:201] 
	I0819 19:06:18.866489  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:06:18.866562  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:06:18.868097  452010 out.go:177] * Starting "ha-163902-m02" control-plane node in "ha-163902" cluster
	I0819 19:06:18.869172  452010 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:06:18.869199  452010 cache.go:56] Caching tarball of preloaded images
	I0819 19:06:18.869292  452010 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:06:18.869303  452010 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:06:18.869365  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:06:18.869546  452010 start.go:360] acquireMachinesLock for ha-163902-m02: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:06:18.869588  452010 start.go:364] duration metric: took 22.151µs to acquireMachinesLock for "ha-163902-m02"
	I0819 19:06:18.869607  452010 start.go:93] Provisioning new machine with config: &{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:06:18.869680  452010 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0819 19:06:18.871112  452010 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 19:06:18.871199  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:18.871224  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:18.886168  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42023
	I0819 19:06:18.886614  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:18.887144  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:18.887167  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:18.887473  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:18.887703  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetMachineName
	I0819 19:06:18.887860  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:18.888028  452010 start.go:159] libmachine.API.Create for "ha-163902" (driver="kvm2")
	I0819 19:06:18.888052  452010 client.go:168] LocalClient.Create starting
	I0819 19:06:18.888094  452010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem
	I0819 19:06:18.888140  452010 main.go:141] libmachine: Decoding PEM data...
	I0819 19:06:18.888162  452010 main.go:141] libmachine: Parsing certificate...
	I0819 19:06:18.888231  452010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem
	I0819 19:06:18.888260  452010 main.go:141] libmachine: Decoding PEM data...
	I0819 19:06:18.888276  452010 main.go:141] libmachine: Parsing certificate...
	I0819 19:06:18.888302  452010 main.go:141] libmachine: Running pre-create checks...
	I0819 19:06:18.888313  452010 main.go:141] libmachine: (ha-163902-m02) Calling .PreCreateCheck
	I0819 19:06:18.888470  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetConfigRaw
	I0819 19:06:18.888848  452010 main.go:141] libmachine: Creating machine...
	I0819 19:06:18.888862  452010 main.go:141] libmachine: (ha-163902-m02) Calling .Create
	I0819 19:06:18.889007  452010 main.go:141] libmachine: (ha-163902-m02) Creating KVM machine...
	I0819 19:06:18.890208  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found existing default KVM network
	I0819 19:06:18.890322  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found existing private KVM network mk-ha-163902
	I0819 19:06:18.890448  452010 main.go:141] libmachine: (ha-163902-m02) Setting up store path in /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02 ...
	I0819 19:06:18.890478  452010 main.go:141] libmachine: (ha-163902-m02) Building disk image from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 19:06:18.890498  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:18.890425  452374 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:06:18.890610  452010 main.go:141] libmachine: (ha-163902-m02) Downloading /home/jenkins/minikube-integration/19423-430949/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 19:06:19.144676  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:19.144527  452374 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa...
	I0819 19:06:19.231508  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:19.231334  452374 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/ha-163902-m02.rawdisk...
	I0819 19:06:19.231542  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Writing magic tar header
	I0819 19:06:19.231553  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Writing SSH key tar header
	I0819 19:06:19.231563  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:19.231455  452374 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02 ...
	I0819 19:06:19.231578  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02
	I0819 19:06:19.231617  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines
	I0819 19:06:19.231630  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:06:19.231648  452010 main.go:141] libmachine: (ha-163902-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02 (perms=drwx------)
	I0819 19:06:19.231661  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949
	I0819 19:06:19.231675  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 19:06:19.231687  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Checking permissions on dir: /home/jenkins
	I0819 19:06:19.231697  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Checking permissions on dir: /home
	I0819 19:06:19.231717  452010 main.go:141] libmachine: (ha-163902-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines (perms=drwxr-xr-x)
	I0819 19:06:19.231731  452010 main.go:141] libmachine: (ha-163902-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube (perms=drwxr-xr-x)
	I0819 19:06:19.231742  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Skipping /home - not owner
	I0819 19:06:19.231762  452010 main.go:141] libmachine: (ha-163902-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949 (perms=drwxrwxr-x)
	I0819 19:06:19.231779  452010 main.go:141] libmachine: (ha-163902-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 19:06:19.231798  452010 main.go:141] libmachine: (ha-163902-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 19:06:19.231809  452010 main.go:141] libmachine: (ha-163902-m02) Creating domain...
	I0819 19:06:19.232755  452010 main.go:141] libmachine: (ha-163902-m02) define libvirt domain using xml: 
	I0819 19:06:19.232780  452010 main.go:141] libmachine: (ha-163902-m02) <domain type='kvm'>
	I0819 19:06:19.232790  452010 main.go:141] libmachine: (ha-163902-m02)   <name>ha-163902-m02</name>
	I0819 19:06:19.232801  452010 main.go:141] libmachine: (ha-163902-m02)   <memory unit='MiB'>2200</memory>
	I0819 19:06:19.232809  452010 main.go:141] libmachine: (ha-163902-m02)   <vcpu>2</vcpu>
	I0819 19:06:19.232815  452010 main.go:141] libmachine: (ha-163902-m02)   <features>
	I0819 19:06:19.232824  452010 main.go:141] libmachine: (ha-163902-m02)     <acpi/>
	I0819 19:06:19.232831  452010 main.go:141] libmachine: (ha-163902-m02)     <apic/>
	I0819 19:06:19.232839  452010 main.go:141] libmachine: (ha-163902-m02)     <pae/>
	I0819 19:06:19.232864  452010 main.go:141] libmachine: (ha-163902-m02)     
	I0819 19:06:19.232904  452010 main.go:141] libmachine: (ha-163902-m02)   </features>
	I0819 19:06:19.232933  452010 main.go:141] libmachine: (ha-163902-m02)   <cpu mode='host-passthrough'>
	I0819 19:06:19.232961  452010 main.go:141] libmachine: (ha-163902-m02)   
	I0819 19:06:19.232980  452010 main.go:141] libmachine: (ha-163902-m02)   </cpu>
	I0819 19:06:19.232997  452010 main.go:141] libmachine: (ha-163902-m02)   <os>
	I0819 19:06:19.233014  452010 main.go:141] libmachine: (ha-163902-m02)     <type>hvm</type>
	I0819 19:06:19.233027  452010 main.go:141] libmachine: (ha-163902-m02)     <boot dev='cdrom'/>
	I0819 19:06:19.233037  452010 main.go:141] libmachine: (ha-163902-m02)     <boot dev='hd'/>
	I0819 19:06:19.233049  452010 main.go:141] libmachine: (ha-163902-m02)     <bootmenu enable='no'/>
	I0819 19:06:19.233058  452010 main.go:141] libmachine: (ha-163902-m02)   </os>
	I0819 19:06:19.233065  452010 main.go:141] libmachine: (ha-163902-m02)   <devices>
	I0819 19:06:19.233074  452010 main.go:141] libmachine: (ha-163902-m02)     <disk type='file' device='cdrom'>
	I0819 19:06:19.233092  452010 main.go:141] libmachine: (ha-163902-m02)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/boot2docker.iso'/>
	I0819 19:06:19.233104  452010 main.go:141] libmachine: (ha-163902-m02)       <target dev='hdc' bus='scsi'/>
	I0819 19:06:19.233112  452010 main.go:141] libmachine: (ha-163902-m02)       <readonly/>
	I0819 19:06:19.233124  452010 main.go:141] libmachine: (ha-163902-m02)     </disk>
	I0819 19:06:19.233153  452010 main.go:141] libmachine: (ha-163902-m02)     <disk type='file' device='disk'>
	I0819 19:06:19.233167  452010 main.go:141] libmachine: (ha-163902-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 19:06:19.233184  452010 main.go:141] libmachine: (ha-163902-m02)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/ha-163902-m02.rawdisk'/>
	I0819 19:06:19.233195  452010 main.go:141] libmachine: (ha-163902-m02)       <target dev='hda' bus='virtio'/>
	I0819 19:06:19.233206  452010 main.go:141] libmachine: (ha-163902-m02)     </disk>
	I0819 19:06:19.233215  452010 main.go:141] libmachine: (ha-163902-m02)     <interface type='network'>
	I0819 19:06:19.233228  452010 main.go:141] libmachine: (ha-163902-m02)       <source network='mk-ha-163902'/>
	I0819 19:06:19.233243  452010 main.go:141] libmachine: (ha-163902-m02)       <model type='virtio'/>
	I0819 19:06:19.233255  452010 main.go:141] libmachine: (ha-163902-m02)     </interface>
	I0819 19:06:19.233266  452010 main.go:141] libmachine: (ha-163902-m02)     <interface type='network'>
	I0819 19:06:19.233279  452010 main.go:141] libmachine: (ha-163902-m02)       <source network='default'/>
	I0819 19:06:19.233295  452010 main.go:141] libmachine: (ha-163902-m02)       <model type='virtio'/>
	I0819 19:06:19.233306  452010 main.go:141] libmachine: (ha-163902-m02)     </interface>
	I0819 19:06:19.233313  452010 main.go:141] libmachine: (ha-163902-m02)     <serial type='pty'>
	I0819 19:06:19.233326  452010 main.go:141] libmachine: (ha-163902-m02)       <target port='0'/>
	I0819 19:06:19.233337  452010 main.go:141] libmachine: (ha-163902-m02)     </serial>
	I0819 19:06:19.233348  452010 main.go:141] libmachine: (ha-163902-m02)     <console type='pty'>
	I0819 19:06:19.233364  452010 main.go:141] libmachine: (ha-163902-m02)       <target type='serial' port='0'/>
	I0819 19:06:19.233376  452010 main.go:141] libmachine: (ha-163902-m02)     </console>
	I0819 19:06:19.233387  452010 main.go:141] libmachine: (ha-163902-m02)     <rng model='virtio'>
	I0819 19:06:19.233403  452010 main.go:141] libmachine: (ha-163902-m02)       <backend model='random'>/dev/random</backend>
	I0819 19:06:19.233413  452010 main.go:141] libmachine: (ha-163902-m02)     </rng>
	I0819 19:06:19.233430  452010 main.go:141] libmachine: (ha-163902-m02)     
	I0819 19:06:19.233451  452010 main.go:141] libmachine: (ha-163902-m02)     
	I0819 19:06:19.233463  452010 main.go:141] libmachine: (ha-163902-m02)   </devices>
	I0819 19:06:19.233470  452010 main.go:141] libmachine: (ha-163902-m02) </domain>
	I0819 19:06:19.233482  452010 main.go:141] libmachine: (ha-163902-m02) 
	I0819 19:06:19.240231  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:9f:ed:ae in network default
	I0819 19:06:19.240868  452010 main.go:141] libmachine: (ha-163902-m02) Ensuring networks are active...
	I0819 19:06:19.240891  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:19.241837  452010 main.go:141] libmachine: (ha-163902-m02) Ensuring network default is active
	I0819 19:06:19.242186  452010 main.go:141] libmachine: (ha-163902-m02) Ensuring network mk-ha-163902 is active
	I0819 19:06:19.242500  452010 main.go:141] libmachine: (ha-163902-m02) Getting domain xml...
	I0819 19:06:19.243337  452010 main.go:141] libmachine: (ha-163902-m02) Creating domain...
	I0819 19:06:20.479495  452010 main.go:141] libmachine: (ha-163902-m02) Waiting to get IP...
	I0819 19:06:20.480261  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:20.480701  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:20.480744  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:20.480681  452374 retry.go:31] will retry after 209.264831ms: waiting for machine to come up
	I0819 19:06:20.691235  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:20.691678  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:20.691713  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:20.691621  452374 retry.go:31] will retry after 241.772157ms: waiting for machine to come up
	I0819 19:06:20.935152  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:20.935570  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:20.935591  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:20.935531  452374 retry.go:31] will retry after 360.106793ms: waiting for machine to come up
	I0819 19:06:21.297067  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:21.297619  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:21.297645  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:21.297574  452374 retry.go:31] will retry after 403.561399ms: waiting for machine to come up
	I0819 19:06:21.703174  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:21.703612  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:21.703644  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:21.703562  452374 retry.go:31] will retry after 752.964877ms: waiting for machine to come up
	I0819 19:06:22.458803  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:22.459336  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:22.459367  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:22.459273  452374 retry.go:31] will retry after 637.744367ms: waiting for machine to come up
	I0819 19:06:23.099345  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:23.099815  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:23.099840  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:23.099710  452374 retry.go:31] will retry after 1.154976518s: waiting for machine to come up
	I0819 19:06:24.256860  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:24.257443  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:24.257476  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:24.257385  452374 retry.go:31] will retry after 1.031712046s: waiting for machine to come up
	I0819 19:06:25.290650  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:25.291159  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:25.291188  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:25.291098  452374 retry.go:31] will retry after 1.272784033s: waiting for machine to come up
	I0819 19:06:26.565596  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:26.566129  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:26.566157  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:26.566062  452374 retry.go:31] will retry after 1.65255646s: waiting for machine to come up
	I0819 19:06:28.220964  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:28.221448  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:28.221498  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:28.221428  452374 retry.go:31] will retry after 2.031618852s: waiting for machine to come up
	I0819 19:06:30.254961  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:30.255400  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:30.255434  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:30.255356  452374 retry.go:31] will retry after 3.580532641s: waiting for machine to come up
	I0819 19:06:33.838198  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:33.838578  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:33.838619  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:33.838545  452374 retry.go:31] will retry after 3.563790311s: waiting for machine to come up
	I0819 19:06:37.404569  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:37.405172  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find current IP address of domain ha-163902-m02 in network mk-ha-163902
	I0819 19:06:37.405205  452010 main.go:141] libmachine: (ha-163902-m02) DBG | I0819 19:06:37.405082  452374 retry.go:31] will retry after 5.402566654s: waiting for machine to come up
	I0819 19:06:42.810280  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:42.810723  452010 main.go:141] libmachine: (ha-163902-m02) Found IP for machine: 192.168.39.162
	I0819 19:06:42.810745  452010 main.go:141] libmachine: (ha-163902-m02) Reserving static IP address...
	I0819 19:06:42.810771  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has current primary IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:42.811159  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find host DHCP lease matching {name: "ha-163902-m02", mac: "52:54:00:92:f5:c9", ip: "192.168.39.162"} in network mk-ha-163902
	I0819 19:06:42.895009  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Getting to WaitForSSH function...
	I0819 19:06:42.895034  452010 main.go:141] libmachine: (ha-163902-m02) Reserved static IP address: 192.168.39.162
	I0819 19:06:42.895047  452010 main.go:141] libmachine: (ha-163902-m02) Waiting for SSH to be available...
	I0819 19:06:42.897729  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:42.898129  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902
	I0819 19:06:42.898158  452010 main.go:141] libmachine: (ha-163902-m02) DBG | unable to find defined IP address of network mk-ha-163902 interface with MAC address 52:54:00:92:f5:c9
	I0819 19:06:42.898365  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Using SSH client type: external
	I0819 19:06:42.898392  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa (-rw-------)
	I0819 19:06:42.898430  452010 main.go:141] libmachine: (ha-163902-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:06:42.898444  452010 main.go:141] libmachine: (ha-163902-m02) DBG | About to run SSH command:
	I0819 19:06:42.898459  452010 main.go:141] libmachine: (ha-163902-m02) DBG | exit 0
	I0819 19:06:42.902075  452010 main.go:141] libmachine: (ha-163902-m02) DBG | SSH cmd err, output: exit status 255: 
	I0819 19:06:42.902099  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 19:06:42.902110  452010 main.go:141] libmachine: (ha-163902-m02) DBG | command : exit 0
	I0819 19:06:42.902117  452010 main.go:141] libmachine: (ha-163902-m02) DBG | err     : exit status 255
	I0819 19:06:42.902128  452010 main.go:141] libmachine: (ha-163902-m02) DBG | output  : 
	I0819 19:06:45.903578  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Getting to WaitForSSH function...
	I0819 19:06:45.906100  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:45.906543  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:45.906582  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:45.906718  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Using SSH client type: external
	I0819 19:06:45.906742  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa (-rw-------)
	I0819 19:06:45.906763  452010 main.go:141] libmachine: (ha-163902-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:06:45.906775  452010 main.go:141] libmachine: (ha-163902-m02) DBG | About to run SSH command:
	I0819 19:06:45.906790  452010 main.go:141] libmachine: (ha-163902-m02) DBG | exit 0
	I0819 19:06:46.029270  452010 main.go:141] libmachine: (ha-163902-m02) DBG | SSH cmd err, output: <nil>: 
	I0819 19:06:46.029593  452010 main.go:141] libmachine: (ha-163902-m02) KVM machine creation complete!
	I0819 19:06:46.029973  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetConfigRaw
	I0819 19:06:46.030719  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:46.030971  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:46.031146  452010 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 19:06:46.031164  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetState
	I0819 19:06:46.032447  452010 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 19:06:46.032467  452010 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 19:06:46.032478  452010 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 19:06:46.032487  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:46.034805  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.035732  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.035761  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.036454  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:46.036713  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.036919  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.037113  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:46.037330  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:06:46.037572  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0819 19:06:46.037584  452010 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 19:06:46.140509  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:06:46.140546  452010 main.go:141] libmachine: Detecting the provisioner...
	I0819 19:06:46.140558  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:46.143288  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.143565  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.143592  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.143717  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:46.143925  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.144103  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.144235  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:46.144447  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:06:46.144671  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0819 19:06:46.144687  452010 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 19:06:46.249738  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 19:06:46.249836  452010 main.go:141] libmachine: found compatible host: buildroot
	I0819 19:06:46.249850  452010 main.go:141] libmachine: Provisioning with buildroot...
	I0819 19:06:46.249865  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetMachineName
	I0819 19:06:46.250222  452010 buildroot.go:166] provisioning hostname "ha-163902-m02"
	I0819 19:06:46.250255  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetMachineName
	I0819 19:06:46.250482  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:46.253090  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.253476  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.253505  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.253681  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:46.253889  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.254078  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.254216  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:46.254408  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:06:46.254582  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0819 19:06:46.254595  452010 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-163902-m02 && echo "ha-163902-m02" | sudo tee /etc/hostname
	I0819 19:06:46.371136  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-163902-m02
	
	I0819 19:06:46.371175  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:46.374542  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.374922  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.374971  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.375211  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:46.375472  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.375704  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.375854  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:46.376074  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:06:46.376314  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0819 19:06:46.376340  452010 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-163902-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-163902-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-163902-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:06:46.490910  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:06:46.490950  452010 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:06:46.490969  452010 buildroot.go:174] setting up certificates
	I0819 19:06:46.490981  452010 provision.go:84] configureAuth start
	I0819 19:06:46.490991  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetMachineName
	I0819 19:06:46.491351  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:06:46.494171  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.494505  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.494534  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.494726  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:46.497255  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.497624  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.497657  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.497763  452010 provision.go:143] copyHostCerts
	I0819 19:06:46.497804  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:06:46.497855  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:06:46.497868  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:06:46.497941  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:06:46.498019  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:06:46.498037  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:06:46.498041  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:06:46.498065  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:06:46.498114  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:06:46.498131  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:06:46.498137  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:06:46.498158  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:06:46.498205  452010 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.ha-163902-m02 san=[127.0.0.1 192.168.39.162 ha-163902-m02 localhost minikube]
	I0819 19:06:46.688166  452010 provision.go:177] copyRemoteCerts
	I0819 19:06:46.688231  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:06:46.688256  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:46.690890  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.691349  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.691376  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.691618  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:46.691848  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.692029  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:46.692134  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	I0819 19:06:46.775113  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 19:06:46.775201  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 19:06:46.800562  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 19:06:46.800648  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 19:06:46.825181  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 19:06:46.825261  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:06:46.850206  452010 provision.go:87] duration metric: took 359.20931ms to configureAuth
	I0819 19:06:46.850246  452010 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:06:46.850434  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:06:46.850526  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:46.853294  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.853661  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:46.853696  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:46.853864  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:46.854072  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.854237  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:46.854388  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:46.854571  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:06:46.854803  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0819 19:06:46.854824  452010 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:06:47.115715  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:06:47.115742  452010 main.go:141] libmachine: Checking connection to Docker...
	I0819 19:06:47.115753  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetURL
	I0819 19:06:47.117088  452010 main.go:141] libmachine: (ha-163902-m02) DBG | Using libvirt version 6000000
	I0819 19:06:47.119409  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.119731  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:47.119762  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.119948  452010 main.go:141] libmachine: Docker is up and running!
	I0819 19:06:47.119963  452010 main.go:141] libmachine: Reticulating splines...
	I0819 19:06:47.119970  452010 client.go:171] duration metric: took 28.231904734s to LocalClient.Create
	I0819 19:06:47.120005  452010 start.go:167] duration metric: took 28.231975893s to libmachine.API.Create "ha-163902"
	I0819 19:06:47.120017  452010 start.go:293] postStartSetup for "ha-163902-m02" (driver="kvm2")
	I0819 19:06:47.120028  452010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:06:47.120046  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:47.120329  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:06:47.120356  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:47.122945  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.123340  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:47.123368  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.123533  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:47.123760  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:47.123954  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:47.124115  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	I0819 19:06:47.207593  452010 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:06:47.212130  452010 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:06:47.212169  452010 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:06:47.212264  452010 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:06:47.212346  452010 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:06:47.212359  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 19:06:47.212449  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:06:47.222079  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:06:47.247184  452010 start.go:296] duration metric: took 127.149883ms for postStartSetup
	I0819 19:06:47.247249  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetConfigRaw
	I0819 19:06:47.247998  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:06:47.250571  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.250897  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:47.250929  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.251160  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:06:47.251388  452010 start.go:128] duration metric: took 28.381695209s to createHost
	I0819 19:06:47.251418  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:47.253641  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.254012  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:47.254050  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.254245  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:47.254461  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:47.254626  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:47.254783  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:47.254985  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:06:47.255157  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0819 19:06:47.255167  452010 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:06:47.361918  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094407.343319532
	
	I0819 19:06:47.361947  452010 fix.go:216] guest clock: 1724094407.343319532
	I0819 19:06:47.361954  452010 fix.go:229] Guest: 2024-08-19 19:06:47.343319532 +0000 UTC Remote: 2024-08-19 19:06:47.251402615 +0000 UTC m=+75.870319340 (delta=91.916917ms)
	I0819 19:06:47.361971  452010 fix.go:200] guest clock delta is within tolerance: 91.916917ms
	I0819 19:06:47.361977  452010 start.go:83] releasing machines lock for "ha-163902-m02", held for 28.492379147s
	I0819 19:06:47.362002  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:47.362323  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:06:47.364733  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.365073  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:47.365103  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.367658  452010 out.go:177] * Found network options:
	I0819 19:06:47.369187  452010 out.go:177]   - NO_PROXY=192.168.39.227
	W0819 19:06:47.370455  452010 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 19:06:47.370493  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:47.371252  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:47.371472  452010 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:06:47.371583  452010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:06:47.371625  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	W0819 19:06:47.371642  452010 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 19:06:47.371725  452010 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:06:47.371748  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:06:47.374201  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.374392  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.374576  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:47.374602  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.374737  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:47.374743  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:47.374766  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:47.374915  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:06:47.374972  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:47.375146  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:47.375174  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:06:47.375347  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:06:47.375362  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	I0819 19:06:47.375504  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	I0819 19:06:47.607502  452010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:06:47.613186  452010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:06:47.613275  452010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:06:47.628915  452010 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:06:47.628941  452010 start.go:495] detecting cgroup driver to use...
	I0819 19:06:47.629004  452010 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:06:47.647161  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:06:47.661236  452010 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:06:47.661311  452010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:06:47.675255  452010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:06:47.690214  452010 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:06:47.802867  452010 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:06:47.950361  452010 docker.go:233] disabling docker service ...
	I0819 19:06:47.950457  452010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:06:47.964737  452010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:06:47.977713  452010 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:06:48.122143  452010 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:06:48.256398  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:06:48.270509  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:06:48.289209  452010 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:06:48.289278  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:06:48.300162  452010 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:06:48.300241  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:06:48.311567  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:06:48.322706  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:06:48.334109  452010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:06:48.345265  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:06:48.355818  452010 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:06:48.373250  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:06:48.384349  452010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:06:48.394369  452010 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:06:48.394450  452010 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:06:48.408323  452010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:06:48.418407  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:06:48.548481  452010 ssh_runner.go:195] Run: sudo systemctl restart crio
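
The commands above are minikube's CRI-O preparation on the new node: pointing crictl at the CRI-O socket, pinning the pause image, forcing the cgroupfs cgroup manager, and clearing stale CNI state before restarting the runtime. A condensed shell sketch of the same edits (paths and values copied from the log lines above, not the exact code path minikube runs):

	sudo tee /etc/crictl.yaml <<'EOF'
	runtime-endpoint: unix:///var/run/crio/crio.sock
	EOF
	# pin the pause image and cgroup driver expected by kubeadm/kubelet
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio
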
	I0819 19:06:48.690311  452010 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:06:48.690400  452010 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:06:48.695504  452010 start.go:563] Will wait 60s for crictl version
	I0819 19:06:48.695586  452010 ssh_runner.go:195] Run: which crictl
	I0819 19:06:48.699351  452010 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:06:48.736307  452010 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:06:48.736409  452010 ssh_runner.go:195] Run: crio --version
	I0819 19:06:48.763687  452010 ssh_runner.go:195] Run: crio --version
	I0819 19:06:48.793843  452010 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:06:48.795348  452010 out.go:177]   - env NO_PROXY=192.168.39.227
	I0819 19:06:48.796862  452010 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:06:48.799508  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:48.799972  452010 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:06:33 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:06:48.800004  452010 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:06:48.800231  452010 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:06:48.804407  452010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:06:48.816996  452010 mustload.go:65] Loading cluster: ha-163902
	I0819 19:06:48.817328  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:06:48.817633  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:48.817667  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:48.832766  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34887
	I0819 19:06:48.833271  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:48.833821  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:48.833842  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:48.834204  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:48.834427  452010 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:06:48.836015  452010 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:06:48.836403  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:48.836438  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:48.852134  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41377
	I0819 19:06:48.852620  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:48.853069  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:48.853086  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:48.853453  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:48.853685  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:06:48.853871  452010 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902 for IP: 192.168.39.162
	I0819 19:06:48.853883  452010 certs.go:194] generating shared ca certs ...
	I0819 19:06:48.853915  452010 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:48.854117  452010 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:06:48.854176  452010 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:06:48.854190  452010 certs.go:256] generating profile certs ...
	I0819 19:06:48.854287  452010 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key
	I0819 19:06:48.854321  452010 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.4731d0f4
	I0819 19:06:48.854347  452010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.4731d0f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.162 192.168.39.254]
	I0819 19:06:48.963236  452010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.4731d0f4 ...
	I0819 19:06:48.963267  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.4731d0f4: {Name:mkc59270b5f28bfe677695dfd975da72759a5572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:48.963460  452010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.4731d0f4 ...
	I0819 19:06:48.963476  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.4731d0f4: {Name:mkd7343b2ea6812d10a2f5d6ca9281b67dd3ee9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:06:48.963569  452010 certs.go:381] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.4731d0f4 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt
	I0819 19:06:48.963742  452010 certs.go:385] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.4731d0f4 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key
	I0819 19:06:48.963922  452010 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key
	I0819 19:06:48.963942  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 19:06:48.963962  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 19:06:48.963981  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 19:06:48.963999  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 19:06:48.964019  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 19:06:48.964033  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 19:06:48.964051  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 19:06:48.964068  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
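
At this point a new apiserver serving certificate has been generated for the profile with SANs covering the service IP 10.96.0.1, both control-plane IPs (192.168.39.227, 192.168.39.162) and the HA VIP 192.168.39.254, and the asset list above maps it to /var/lib/minikube/certs/apiserver.crt on the node. A hypothetical spot check (not something the test itself runs), once the file has been copied over:

	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
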
	I0819 19:06:48.964139  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:06:48.964180  452010 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:06:48.964194  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:06:48.964234  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:06:48.964266  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:06:48.964294  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:06:48.964347  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:06:48.964387  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 19:06:48.964410  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 19:06:48.964427  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:48.964486  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:06:48.967352  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:48.967855  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:06:48.967878  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:48.968056  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:06:48.968287  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:06:48.968440  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:06:48.968605  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:06:49.041602  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 19:06:49.046052  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 19:06:49.062113  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 19:06:49.066289  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 19:06:49.077555  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 19:06:49.081754  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 19:06:49.092067  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 19:06:49.096287  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 19:06:49.106703  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 19:06:49.110861  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 19:06:49.122396  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 19:06:49.126491  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 19:06:49.137756  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:06:49.162262  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:06:49.186698  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:06:49.211042  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:06:49.235218  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 19:06:49.259107  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:06:49.283065  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:06:49.306565  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:06:49.331781  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:06:49.356156  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:06:49.381191  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:06:49.406139  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 19:06:49.423202  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 19:06:49.440609  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 19:06:49.457592  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 19:06:49.474446  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 19:06:49.492390  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 19:06:49.508948  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
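
All shared CA material, the profile certificates and the kubeconfig have now been copied to the m02 guest. A minimal sanity check, assuming SSH access to the node, would be:

	sudo ls -l /var/lib/minikube/certs /var/lib/minikube/certs/etcd
	sudo test -s /var/lib/minikube/kubeconfig && echo 'kubeconfig present'
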
	I0819 19:06:49.525867  452010 ssh_runner.go:195] Run: openssl version
	I0819 19:06:49.531624  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:06:49.542714  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:06:49.547377  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:06:49.547459  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:06:49.553205  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:06:49.564319  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:06:49.575725  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:06:49.580233  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:06:49.580294  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:06:49.586702  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:06:49.598456  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:06:49.610315  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:49.614956  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:49.615051  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:06:49.620779  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:06:49.631576  452010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:06:49.636283  452010 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 19:06:49.636350  452010 kubeadm.go:934] updating node {m02 192.168.39.162 8443 v1.31.0 crio true true} ...
	I0819 19:06:49.636461  452010 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-163902-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
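
The kubelet unit shown above is rendered into the 10-kubeadm.conf drop-in and kubelet.service files transferred a few lines further down. A hedged way to confirm the effective flags on the node afterwards (standard systemd tooling, not part of the test flow):

	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
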
	I0819 19:06:49.636488  452010 kube-vip.go:115] generating kube-vip config ...
	I0819 19:06:49.636529  452010 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 19:06:49.654189  452010 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 19:06:49.654277  452010 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
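
This generated manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp below), so kubelet runs kube-vip as a static pod that holds the control-plane VIP 192.168.39.254 on eth0 and load-balances port 8443. A hypothetical post-start check on the node:

	sudo crictl pods --name kube-vip
	ip addr show eth0 | grep 192.168.39.254
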
	I0819 19:06:49.654357  452010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:06:49.666661  452010 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 19:06:49.666736  452010 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 19:06:49.679260  452010 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 19:06:49.679297  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 19:06:49.679372  452010 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 19:06:49.679396  452010 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0819 19:06:49.679378  452010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 19:06:49.684043  452010 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 19:06:49.684086  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 19:06:50.618723  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 19:06:50.618830  452010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 19:06:50.623803  452010 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 19:06:50.623852  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 19:06:50.680835  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:06:50.716285  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 19:06:50.716393  452010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 19:06:50.724629  452010 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 19:06:50.724676  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0819 19:06:51.161810  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 19:06:51.171528  452010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 19:06:51.188306  452010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:06:51.205300  452010 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 19:06:51.221816  452010 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 19:06:51.225777  452010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:06:51.238152  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:06:51.357742  452010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:06:51.375585  452010 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:06:51.375948  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:06:51.375996  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:06:51.391834  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43635
	I0819 19:06:51.392385  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:06:51.392946  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:06:51.392977  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:06:51.393420  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:06:51.393636  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:06:51.393843  452010 start.go:317] joinCluster: &{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:06:51.393973  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 19:06:51.393989  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:06:51.397091  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:51.397629  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:06:51.397667  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:06:51.397820  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:06:51.398038  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:06:51.398241  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:06:51.398386  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:06:51.536333  452010 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:06:51.536402  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token am366n.it73me2b53s38qnq --discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-163902-m02 --control-plane --apiserver-advertise-address=192.168.39.162 --apiserver-bind-port=8443"
	I0819 19:07:13.043284  452010 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token am366n.it73me2b53s38qnq --discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-163902-m02 --control-plane --apiserver-advertise-address=192.168.39.162 --apiserver-bind-port=8443": (21.506847863s)
	I0819 19:07:13.043332  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 19:07:13.565976  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-163902-m02 minikube.k8s.io/updated_at=2024_08_19T19_07_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=ha-163902 minikube.k8s.io/primary=false
	I0819 19:07:13.664942  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-163902-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 19:07:13.776384  452010 start.go:319] duration metric: took 22.382536414s to joinCluster
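
Joining m02 as a second control-plane node reduces to the commands captured in the log: a kubeadm join with the one-time token and CA hash, a kubelet restart, and minikube's label/taint adjustment. A condensed sketch (token and hash redacted here; the real one-time values appear verbatim in the log lines above):

	sudo kubeadm join control-plane.minikube.internal:8443 \
	  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	  --control-plane --apiserver-advertise-address=192.168.39.162 --apiserver-bind-port=8443 \
	  --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-163902-m02 \
	  --ignore-preflight-errors=all
	sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet
	kubectl label --overwrite nodes ha-163902-m02 minikube.k8s.io/primary=false   # plus the version/commit labels shown above
	kubectl taint nodes ha-163902-m02 node-role.kubernetes.io/control-plane:NoSchedule-
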
	I0819 19:07:13.776496  452010 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:07:13.776816  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:07:13.778036  452010 out.go:177] * Verifying Kubernetes components...
	I0819 19:07:13.779490  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:07:14.029940  452010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:07:14.075916  452010 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:07:14.076250  452010 kapi.go:59] client config for ha-163902: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key", CAFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 19:07:14.076336  452010 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.227:8443
	I0819 19:07:14.076646  452010 node_ready.go:35] waiting up to 6m0s for node "ha-163902-m02" to be "Ready" ...
	I0819 19:07:14.076762  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:14.076774  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:14.076786  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:14.076793  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:14.096746  452010 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0819 19:07:14.577779  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:14.577815  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:14.577828  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:14.577834  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:14.586954  452010 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 19:07:15.076937  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:15.076970  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:15.076982  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:15.076988  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:15.083444  452010 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 19:07:15.577281  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:15.577306  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:15.577314  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:15.577319  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:15.580559  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:16.077342  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:16.077367  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:16.077376  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:16.077380  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:16.080563  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:16.081250  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:16.577752  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:16.577776  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:16.577784  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:16.577790  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:16.581579  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:17.077503  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:17.077529  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:17.077538  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:17.077542  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:17.080905  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:17.577841  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:17.577883  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:17.577892  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:17.577896  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:17.581363  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:18.077999  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:18.078032  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:18.078042  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:18.078047  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:18.081372  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:18.082174  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:18.577568  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:18.577592  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:18.577601  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:18.577604  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:18.580896  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:19.077445  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:19.077473  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:19.077482  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:19.077487  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:19.080726  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:19.577746  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:19.577772  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:19.577781  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:19.577785  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:19.582182  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:07:20.076987  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:20.077012  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:20.077019  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:20.077024  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:20.122989  452010 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0819 19:07:20.123408  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:20.577872  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:20.577899  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:20.577910  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:20.577915  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:20.581604  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:21.077636  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:21.077661  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:21.077669  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:21.077674  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:21.082054  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:07:21.577535  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:21.577560  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:21.577569  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:21.577574  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:21.581033  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:22.076938  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:22.076966  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:22.076974  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:22.076978  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:22.080744  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:22.577705  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:22.577730  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:22.577738  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:22.577743  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:22.581201  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:22.581773  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:23.077034  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:23.077061  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:23.077070  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:23.077076  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:23.079992  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:23.577915  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:23.577944  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:23.577957  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:23.577964  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:23.581249  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:24.077818  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:24.077849  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:24.077860  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:24.077868  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:24.081464  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:24.577323  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:24.577349  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:24.577358  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:24.577362  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:24.580849  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:25.076926  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:25.076959  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:25.076971  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:25.076977  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:25.080348  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:25.080966  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:25.577361  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:25.577386  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:25.577395  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:25.577400  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:25.580864  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:26.077798  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:26.077834  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:26.077845  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:26.077849  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:26.080955  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:26.577189  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:26.577217  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:26.577226  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:26.577231  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:26.580600  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:27.077563  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:27.077586  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:27.077595  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:27.077600  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:27.081458  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:27.082293  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:27.577810  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:27.577835  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:27.577844  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:27.577847  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:27.581936  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:07:28.077195  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:28.077225  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:28.077238  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:28.077243  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:28.080523  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:28.577630  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:28.577656  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:28.577665  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:28.577669  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:28.580980  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:29.077582  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:29.077612  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:29.077620  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:29.077624  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:29.080971  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:29.576863  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:29.576892  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:29.576901  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:29.576905  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:29.580440  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:29.580954  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:30.077368  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:30.077396  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:30.077404  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:30.077409  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:30.080700  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:30.577721  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:30.577747  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:30.577756  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:30.577760  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:30.581399  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:31.077219  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:31.077245  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:31.077253  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:31.077258  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:31.080712  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:31.577280  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:31.577308  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:31.577319  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:31.577325  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:31.580843  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:31.581534  452010 node_ready.go:53] node "ha-163902-m02" has status "Ready":"False"
	I0819 19:07:32.076972  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:32.077004  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.077015  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.077025  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.080781  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:32.081404  452010 node_ready.go:49] node "ha-163902-m02" has status "Ready":"True"
	I0819 19:07:32.081432  452010 node_ready.go:38] duration metric: took 18.004765365s for node "ha-163902-m02" to be "Ready" ...
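The loop above issues a GET against /api/v1/nodes/ha-163902-m02 roughly every 500ms until the node's Ready condition turns True (about 18s here). As an illustration only, not minikube's actual node_ready.go code, a minimal client-go sketch of the same check could look like the following; the kubeconfig location and the 6-minute timeout are assumptions for the sketch.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its Ready condition reports True,
// mirroring the ~500ms GET loop visible in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	// Assumption: a standard kubeconfig at the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-163902-m02"); err != nil {
		panic(err)
	}
	fmt.Println("node ha-163902-m02 is Ready")
}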
	I0819 19:07:32.081445  452010 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:07:32.081544  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:07:32.081558  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.081568  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.081574  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.086806  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:07:32.093034  452010 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nkths" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.093169  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-nkths
	I0819 19:07:32.093180  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.093187  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.093191  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.096615  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:32.097343  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:32.097359  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.097367  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.097370  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.100137  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:32.100755  452010 pod_ready.go:93] pod "coredns-6f6b679f8f-nkths" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:32.100775  452010 pod_ready.go:82] duration metric: took 7.709242ms for pod "coredns-6f6b679f8f-nkths" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.100785  452010 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wmp8k" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.100846  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-wmp8k
	I0819 19:07:32.100854  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.100861  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.100866  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.103592  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:32.104256  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:32.104275  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.104282  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.104286  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.106843  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:32.107384  452010 pod_ready.go:93] pod "coredns-6f6b679f8f-wmp8k" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:32.107408  452010 pod_ready.go:82] duration metric: took 6.616047ms for pod "coredns-6f6b679f8f-wmp8k" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.107421  452010 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.107492  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-163902
	I0819 19:07:32.107502  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.107510  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.107517  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.110212  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:32.111449  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:32.111468  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.111479  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.111486  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.113751  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:32.114260  452010 pod_ready.go:93] pod "etcd-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:32.114280  452010 pod_ready.go:82] duration metric: took 6.851673ms for pod "etcd-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.114289  452010 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.114397  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-163902-m02
	I0819 19:07:32.114409  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.114416  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.114420  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.117190  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:32.117911  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:32.117929  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.117940  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.117946  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.120664  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:07:32.121157  452010 pod_ready.go:93] pod "etcd-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:32.121179  452010 pod_ready.go:82] duration metric: took 6.88181ms for pod "etcd-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.121198  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.277612  452010 request.go:632] Waited for 156.338168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902
	I0819 19:07:32.277674  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902
	I0819 19:07:32.277680  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.277688  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.277692  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.280988  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:32.478066  452010 request.go:632] Waited for 196.429632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:32.478153  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:32.478161  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.478176  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.478187  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.481514  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:32.482098  452010 pod_ready.go:93] pod "kube-apiserver-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:32.482121  452010 pod_ready.go:82] duration metric: took 360.912121ms for pod "kube-apiserver-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.482132  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.677074  452010 request.go:632] Waited for 194.863482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902-m02
	I0819 19:07:32.677193  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902-m02
	I0819 19:07:32.677205  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.677216  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.677226  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.680997  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:32.877001  452010 request.go:632] Waited for 195.332988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:32.877090  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:32.877095  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:32.877103  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:32.877107  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:32.880190  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:32.880934  452010 pod_ready.go:93] pod "kube-apiserver-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:32.880962  452010 pod_ready.go:82] duration metric: took 398.822495ms for pod "kube-apiserver-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:32.880976  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:33.077931  452010 request.go:632] Waited for 196.851229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902
	I0819 19:07:33.077997  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902
	I0819 19:07:33.078002  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:33.078019  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:33.078025  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:33.082066  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:07:33.277433  452010 request.go:632] Waited for 194.507863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:33.277499  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:33.277505  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:33.277515  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:33.277521  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:33.281107  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:33.281792  452010 pod_ready.go:93] pod "kube-controller-manager-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:33.281813  452010 pod_ready.go:82] duration metric: took 400.829541ms for pod "kube-controller-manager-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:33.281824  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:33.477892  452010 request.go:632] Waited for 195.967397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902-m02
	I0819 19:07:33.477965  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902-m02
	I0819 19:07:33.477973  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:33.477984  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:33.477991  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:33.482174  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:07:33.677478  452010 request.go:632] Waited for 194.399986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:33.677576  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:33.677585  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:33.677598  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:33.677606  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:33.680890  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:33.681698  452010 pod_ready.go:93] pod "kube-controller-manager-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:33.681724  452010 pod_ready.go:82] duration metric: took 399.894379ms for pod "kube-controller-manager-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:33.681735  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4whvs" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:33.877888  452010 request.go:632] Waited for 196.072309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4whvs
	I0819 19:07:33.877968  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4whvs
	I0819 19:07:33.877974  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:33.877982  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:33.877986  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:33.881723  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:34.077843  452010 request.go:632] Waited for 195.46411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:34.077917  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:34.077923  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:34.077933  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:34.077945  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:34.081664  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:34.082198  452010 pod_ready.go:93] pod "kube-proxy-4whvs" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:34.082219  452010 pod_ready.go:82] duration metric: took 400.478539ms for pod "kube-proxy-4whvs" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:34.082229  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wxrsv" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:34.277406  452010 request.go:632] Waited for 195.097969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxrsv
	I0819 19:07:34.277471  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxrsv
	I0819 19:07:34.277476  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:34.277484  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:34.277488  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:34.280924  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:34.478028  452010 request.go:632] Waited for 196.395749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:34.478117  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:34.478128  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:34.478141  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:34.478147  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:34.481981  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:34.482567  452010 pod_ready.go:93] pod "kube-proxy-wxrsv" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:34.482589  452010 pod_ready.go:82] duration metric: took 400.353478ms for pod "kube-proxy-wxrsv" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:34.482598  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:34.677654  452010 request.go:632] Waited for 194.973644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902
	I0819 19:07:34.677743  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902
	I0819 19:07:34.677766  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:34.677795  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:34.677806  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:34.681127  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:34.877054  452010 request.go:632] Waited for 195.298137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:34.877122  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:07:34.877127  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:34.877150  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:34.877154  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:34.880635  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:34.881396  452010 pod_ready.go:93] pod "kube-scheduler-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:34.881421  452010 pod_ready.go:82] duration metric: took 398.815157ms for pod "kube-scheduler-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:34.881434  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:35.077465  452010 request.go:632] Waited for 195.950631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902-m02
	I0819 19:07:35.077565  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902-m02
	I0819 19:07:35.077575  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:35.077583  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:35.077587  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:35.081189  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:35.277171  452010 request.go:632] Waited for 195.326146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:35.277249  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:07:35.277254  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:35.277262  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:35.277268  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:35.280625  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:35.281104  452010 pod_ready.go:93] pod "kube-scheduler-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:07:35.281151  452010 pod_ready.go:82] duration metric: took 399.707427ms for pod "kube-scheduler-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:07:35.281171  452010 pod_ready.go:39] duration metric: took 3.199703609s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
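The pod_ready phase above walks the listed selectors (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler) and waits for every matching kube-system pod to report the PodReady condition, re-fetching the pod's node along the way. A hypothetical helper sketch of that readiness check with client-go (the package name and polling interval are assumptions, not minikube's real pod_ready.go):

// Package kverifysketch is a hypothetical stand-in, not minikube's kverify package.
package kverifysketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether a pod's PodReady condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitSystemPodsReady blocks until every kube-system pod matching one of the
// given label selectors (e.g. "component=etcd", "k8s-app=kube-proxy") is Ready.
func waitSystemPodsReady(ctx context.Context, cs kubernetes.Interface, selectors []string) error {
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				return err
			}
			allReady := true
			for i := range pods.Items {
				if !isPodReady(&pods.Items[i]) {
					allReady = false
					break
				}
			}
			if allReady {
				break
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
	return nil
}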
	I0819 19:07:35.281196  452010 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:07:35.281258  452010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:07:35.297202  452010 api_server.go:72] duration metric: took 21.520658024s to wait for apiserver process to appear ...
	I0819 19:07:35.297237  452010 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:07:35.297264  452010 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0819 19:07:35.302264  452010 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0819 19:07:35.302357  452010 round_trippers.go:463] GET https://192.168.39.227:8443/version
	I0819 19:07:35.302365  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:35.302373  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:35.302377  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:35.303359  452010 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 19:07:35.303527  452010 api_server.go:141] control plane version: v1.31.0
	I0819 19:07:35.303549  452010 api_server.go:131] duration metric: took 6.303973ms to wait for apiserver health ...
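The healthz step above is a plain HTTPS GET to https://192.168.39.227:8443/healthz, where a 200 response whose body is "ok" counts as healthy, followed by a GET /version to read the control-plane version (v1.31.0). A standalone sketch of the same probe is below; the CA certificate path is a placeholder, not minikube's real certificate layout.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// newAPIClient builds an HTTPS client that trusts the cluster CA.
// The caFile argument is a placeholder path for this sketch.
func newAPIClient(caFile string) (*http.Client, error) {
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("no certificates found in %s", caFile)
	}
	return &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}, nil
}

func main() {
	base := "https://192.168.39.227:8443"
	client, err := newAPIClient("/path/to/ca.crt") // placeholder path
	if err != nil {
		panic(err)
	}

	// healthz: a 200 response with the body "ok" means the apiserver is healthy.
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

	// version: the same endpoint the log queries right after healthz.
	resp, err = client.Get(base + "/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.31.0
}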
	I0819 19:07:35.303559  452010 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:07:35.477939  452010 request.go:632] Waited for 174.284855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:07:35.478039  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:07:35.478057  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:35.478068  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:35.478081  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:35.487457  452010 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 19:07:35.491884  452010 system_pods.go:59] 17 kube-system pods found
	I0819 19:07:35.491929  452010 system_pods.go:61] "coredns-6f6b679f8f-nkths" [b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2] Running
	I0819 19:07:35.491937  452010 system_pods.go:61] "coredns-6f6b679f8f-wmp8k" [ca2ee9c4-992a-4251-a717-9843b7b41894] Running
	I0819 19:07:35.491943  452010 system_pods.go:61] "etcd-ha-163902" [fc3d22fd-d8b3-45ca-a6f6-9898285f85d4] Running
	I0819 19:07:35.491949  452010 system_pods.go:61] "etcd-ha-163902-m02" [f05c7c5e-de7b-4963-b897-204a95440e0d] Running
	I0819 19:07:35.491954  452010 system_pods.go:61] "kindnet-97cnn" [75284cfb-20b5-4675-b53e-db4130cc6722] Running
	I0819 19:07:35.491958  452010 system_pods.go:61] "kindnet-bpwjl" [624275c2-a670-4cc0-a11c-70f3e1b78946] Running
	I0819 19:07:35.491963  452010 system_pods.go:61] "kube-apiserver-ha-163902" [ab456c18-e3cf-462b-8028-55ddf9b36306] Running
	I0819 19:07:35.491968  452010 system_pods.go:61] "kube-apiserver-ha-163902-m02" [f11ec84f-77aa-4003-a786-f6c54b4c14bd] Running
	I0819 19:07:35.491974  452010 system_pods.go:61] "kube-controller-manager-ha-163902" [efeafbc7-7719-4632-9215-fd3c3ca09cc5] Running
	I0819 19:07:35.491980  452010 system_pods.go:61] "kube-controller-manager-ha-163902-m02" [2d80759a-8137-4e3c-a621-ba92532c9d9b] Running
	I0819 19:07:35.491985  452010 system_pods.go:61] "kube-proxy-4whvs" [b241f2e5-d83c-432b-bdce-c26940efa096] Running
	I0819 19:07:35.491990  452010 system_pods.go:61] "kube-proxy-wxrsv" [3d78c5e8-eed2-4da5-9425-76f96e2d8ed6] Running
	I0819 19:07:35.491998  452010 system_pods.go:61] "kube-scheduler-ha-163902" [ee9d8a96-56d9-4bc2-9829-bdf4a0ec0749] Running
	I0819 19:07:35.492004  452010 system_pods.go:61] "kube-scheduler-ha-163902-m02" [90a559e2-5c2a-4db8-b9d4-21362177d686] Running
	I0819 19:07:35.492010  452010 system_pods.go:61] "kube-vip-ha-163902" [3143571f-0eb1-4f9a-ada5-27fda4724d18] Running
	I0819 19:07:35.492014  452010 system_pods.go:61] "kube-vip-ha-163902-m02" [ccd47f0d-1f18-4146-808f-5436d25c859f] Running
	I0819 19:07:35.492019  452010 system_pods.go:61] "storage-provisioner" [05dffa5a-3372-4a79-94ad-33d14a4b7fd0] Running
	I0819 19:07:35.492029  452010 system_pods.go:74] duration metric: took 188.461842ms to wait for pod list to return data ...
	I0819 19:07:35.492044  452010 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:07:35.677485  452010 request.go:632] Waited for 185.337326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0819 19:07:35.677576  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0819 19:07:35.677584  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:35.677594  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:35.677601  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:35.682572  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:07:35.682838  452010 default_sa.go:45] found service account: "default"
	I0819 19:07:35.682856  452010 default_sa.go:55] duration metric: took 190.80577ms for default service account to be created ...
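The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines in this phase come from client-go's client-side token-bucket rate limiter (default QPS=5, Burst=10), not from the API server's Priority and Fairness feature: bursts of back-to-back GETs simply exceed the bucket and get delayed locally. A minimal sketch of where those limits live on rest.Config follows; the raised values are purely illustrative, not what minikube configures.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a standard kubeconfig at the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go's default client-side limiter (QPS=5, Burst=10) is what emits
	// the throttling messages seen above. Raising the limits (illustrative
	// values) trades those local delays for more load on the apiserver.
	cfg.QPS = 50
	cfg.Burst = 100
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println("client configured:", cs != nil)
}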
	I0819 19:07:35.682867  452010 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:07:35.877325  452010 request.go:632] Waited for 194.369278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:07:35.877408  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:07:35.877416  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:35.877428  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:35.877434  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:35.882295  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:07:35.887802  452010 system_pods.go:86] 17 kube-system pods found
	I0819 19:07:35.887838  452010 system_pods.go:89] "coredns-6f6b679f8f-nkths" [b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2] Running
	I0819 19:07:35.887844  452010 system_pods.go:89] "coredns-6f6b679f8f-wmp8k" [ca2ee9c4-992a-4251-a717-9843b7b41894] Running
	I0819 19:07:35.887849  452010 system_pods.go:89] "etcd-ha-163902" [fc3d22fd-d8b3-45ca-a6f6-9898285f85d4] Running
	I0819 19:07:35.887853  452010 system_pods.go:89] "etcd-ha-163902-m02" [f05c7c5e-de7b-4963-b897-204a95440e0d] Running
	I0819 19:07:35.887856  452010 system_pods.go:89] "kindnet-97cnn" [75284cfb-20b5-4675-b53e-db4130cc6722] Running
	I0819 19:07:35.887860  452010 system_pods.go:89] "kindnet-bpwjl" [624275c2-a670-4cc0-a11c-70f3e1b78946] Running
	I0819 19:07:35.887863  452010 system_pods.go:89] "kube-apiserver-ha-163902" [ab456c18-e3cf-462b-8028-55ddf9b36306] Running
	I0819 19:07:35.887867  452010 system_pods.go:89] "kube-apiserver-ha-163902-m02" [f11ec84f-77aa-4003-a786-f6c54b4c14bd] Running
	I0819 19:07:35.887870  452010 system_pods.go:89] "kube-controller-manager-ha-163902" [efeafbc7-7719-4632-9215-fd3c3ca09cc5] Running
	I0819 19:07:35.887874  452010 system_pods.go:89] "kube-controller-manager-ha-163902-m02" [2d80759a-8137-4e3c-a621-ba92532c9d9b] Running
	I0819 19:07:35.887877  452010 system_pods.go:89] "kube-proxy-4whvs" [b241f2e5-d83c-432b-bdce-c26940efa096] Running
	I0819 19:07:35.887880  452010 system_pods.go:89] "kube-proxy-wxrsv" [3d78c5e8-eed2-4da5-9425-76f96e2d8ed6] Running
	I0819 19:07:35.887883  452010 system_pods.go:89] "kube-scheduler-ha-163902" [ee9d8a96-56d9-4bc2-9829-bdf4a0ec0749] Running
	I0819 19:07:35.887889  452010 system_pods.go:89] "kube-scheduler-ha-163902-m02" [90a559e2-5c2a-4db8-b9d4-21362177d686] Running
	I0819 19:07:35.887894  452010 system_pods.go:89] "kube-vip-ha-163902" [3143571f-0eb1-4f9a-ada5-27fda4724d18] Running
	I0819 19:07:35.887899  452010 system_pods.go:89] "kube-vip-ha-163902-m02" [ccd47f0d-1f18-4146-808f-5436d25c859f] Running
	I0819 19:07:35.887903  452010 system_pods.go:89] "storage-provisioner" [05dffa5a-3372-4a79-94ad-33d14a4b7fd0] Running
	I0819 19:07:35.887913  452010 system_pods.go:126] duration metric: took 205.03521ms to wait for k8s-apps to be running ...
	I0819 19:07:35.887927  452010 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:07:35.887976  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:07:35.903872  452010 system_svc.go:56] duration metric: took 15.92984ms WaitForService to wait for kubelet
	I0819 19:07:35.903906  452010 kubeadm.go:582] duration metric: took 22.127369971s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:07:35.903927  452010 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:07:36.077399  452010 request.go:632] Waited for 173.365979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes
	I0819 19:07:36.077501  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes
	I0819 19:07:36.077508  452010 round_trippers.go:469] Request Headers:
	I0819 19:07:36.077519  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:07:36.077545  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:07:36.081185  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:07:36.081895  452010 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:07:36.081947  452010 node_conditions.go:123] node cpu capacity is 2
	I0819 19:07:36.081961  452010 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:07:36.081969  452010 node_conditions.go:123] node cpu capacity is 2
	I0819 19:07:36.081976  452010 node_conditions.go:105] duration metric: took 178.043214ms to run NodePressure ...
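The NodePressure step above reports each node's ephemeral-storage and CPU capacity (17734596Ki and 2 CPUs for both control-plane nodes). A hypothetical client-go snippet reading the same fields from node status, again assuming a default kubeconfig:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// e.g. "node ha-163902: ephemeral-storage=17734596Ki cpu=2"
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}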
	I0819 19:07:36.081992  452010 start.go:241] waiting for startup goroutines ...
	I0819 19:07:36.082023  452010 start.go:255] writing updated cluster config ...
	I0819 19:07:36.084484  452010 out.go:201] 
	I0819 19:07:36.086155  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:07:36.086268  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:07:36.088019  452010 out.go:177] * Starting "ha-163902-m03" control-plane node in "ha-163902" cluster
	I0819 19:07:36.089042  452010 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:07:36.089068  452010 cache.go:56] Caching tarball of preloaded images
	I0819 19:07:36.089224  452010 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:07:36.089237  452010 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:07:36.089368  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:07:36.089608  452010 start.go:360] acquireMachinesLock for ha-163902-m03: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:07:36.089667  452010 start.go:364] duration metric: took 35.517µs to acquireMachinesLock for "ha-163902-m03"
	I0819 19:07:36.089692  452010 start.go:93] Provisioning new machine with config: &{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:07:36.089832  452010 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0819 19:07:36.091440  452010 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 19:07:36.091555  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:07:36.091598  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:07:36.107125  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I0819 19:07:36.107690  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:07:36.108195  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:07:36.108219  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:07:36.108543  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:07:36.108692  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetMachineName
	I0819 19:07:36.108853  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:07:36.108990  452010 start.go:159] libmachine.API.Create for "ha-163902" (driver="kvm2")
	I0819 19:07:36.109016  452010 client.go:168] LocalClient.Create starting
	I0819 19:07:36.109049  452010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem
	I0819 19:07:36.109084  452010 main.go:141] libmachine: Decoding PEM data...
	I0819 19:07:36.109099  452010 main.go:141] libmachine: Parsing certificate...
	I0819 19:07:36.109171  452010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem
	I0819 19:07:36.109195  452010 main.go:141] libmachine: Decoding PEM data...
	I0819 19:07:36.109206  452010 main.go:141] libmachine: Parsing certificate...
	I0819 19:07:36.109243  452010 main.go:141] libmachine: Running pre-create checks...
	I0819 19:07:36.109252  452010 main.go:141] libmachine: (ha-163902-m03) Calling .PreCreateCheck
	I0819 19:07:36.109410  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetConfigRaw
	I0819 19:07:36.109742  452010 main.go:141] libmachine: Creating machine...
	I0819 19:07:36.109756  452010 main.go:141] libmachine: (ha-163902-m03) Calling .Create
	I0819 19:07:36.109928  452010 main.go:141] libmachine: (ha-163902-m03) Creating KVM machine...
	I0819 19:07:36.111348  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found existing default KVM network
	I0819 19:07:36.111504  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found existing private KVM network mk-ha-163902
	I0819 19:07:36.111700  452010 main.go:141] libmachine: (ha-163902-m03) Setting up store path in /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03 ...
	I0819 19:07:36.111728  452010 main.go:141] libmachine: (ha-163902-m03) Building disk image from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 19:07:36.111823  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:36.111693  452804 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:07:36.111932  452010 main.go:141] libmachine: (ha-163902-m03) Downloading /home/jenkins/minikube-integration/19423-430949/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 19:07:36.400593  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:36.400468  452804 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa...
	I0819 19:07:36.505423  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:36.505277  452804 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/ha-163902-m03.rawdisk...
	I0819 19:07:36.505469  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Writing magic tar header
	I0819 19:07:36.505526  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Writing SSH key tar header
	I0819 19:07:36.505560  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:36.505423  452804 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03 ...
	I0819 19:07:36.505589  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03
	I0819 19:07:36.505605  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines
	I0819 19:07:36.505623  452010 main.go:141] libmachine: (ha-163902-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03 (perms=drwx------)
	I0819 19:07:36.505638  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:07:36.505652  452010 main.go:141] libmachine: (ha-163902-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines (perms=drwxr-xr-x)
	I0819 19:07:36.505673  452010 main.go:141] libmachine: (ha-163902-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube (perms=drwxr-xr-x)
	I0819 19:07:36.505688  452010 main.go:141] libmachine: (ha-163902-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949 (perms=drwxrwxr-x)
	I0819 19:07:36.505702  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949
	I0819 19:07:36.505718  452010 main.go:141] libmachine: (ha-163902-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 19:07:36.505731  452010 main.go:141] libmachine: (ha-163902-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 19:07:36.505745  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 19:07:36.505760  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Checking permissions on dir: /home/jenkins
	I0819 19:07:36.505778  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Checking permissions on dir: /home
	I0819 19:07:36.505790  452010 main.go:141] libmachine: (ha-163902-m03) Creating domain...
	I0819 19:07:36.505807  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Skipping /home - not owner
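
The lines above show libmachine tightening permissions on the store path: the machine directory is kept private while each parent directory gets its traverse bit set, stopping at /home, which the driver skips because it is not the owner. A minimal Go sketch of that pattern follows; the helper name, the fixed 0700/0755 modes, and the stop directory are illustrative assumptions, not libmachine's actual common.go.

package main

import (
	"os"
	"path/filepath"
)

// fixPermissions keeps the machine directory private and makes every parent
// up to (but not including) stopAt traversable. Modes are simplified for
// illustration; the real store mixes 0755 and 0775 parents.
func fixPermissions(machineDir, stopAt string) error {
	if err := os.Chmod(machineDir, 0o700); err != nil { // drwx------
		return err
	}
	for dir := filepath.Dir(machineDir); dir != stopAt && dir != "/"; dir = filepath.Dir(dir) {
		if err := os.Chmod(dir, 0o755); err != nil { // set traverse bits
			return err
		}
	}
	return nil
}

func main() {
	_ = fixPermissions(
		"/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03",
		"/home")
}
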
	I0819 19:07:36.506793  452010 main.go:141] libmachine: (ha-163902-m03) define libvirt domain using xml: 
	I0819 19:07:36.506815  452010 main.go:141] libmachine: (ha-163902-m03) <domain type='kvm'>
	I0819 19:07:36.506826  452010 main.go:141] libmachine: (ha-163902-m03)   <name>ha-163902-m03</name>
	I0819 19:07:36.506837  452010 main.go:141] libmachine: (ha-163902-m03)   <memory unit='MiB'>2200</memory>
	I0819 19:07:36.506850  452010 main.go:141] libmachine: (ha-163902-m03)   <vcpu>2</vcpu>
	I0819 19:07:36.506860  452010 main.go:141] libmachine: (ha-163902-m03)   <features>
	I0819 19:07:36.506870  452010 main.go:141] libmachine: (ha-163902-m03)     <acpi/>
	I0819 19:07:36.506878  452010 main.go:141] libmachine: (ha-163902-m03)     <apic/>
	I0819 19:07:36.506886  452010 main.go:141] libmachine: (ha-163902-m03)     <pae/>
	I0819 19:07:36.506895  452010 main.go:141] libmachine: (ha-163902-m03)     
	I0819 19:07:36.506905  452010 main.go:141] libmachine: (ha-163902-m03)   </features>
	I0819 19:07:36.506918  452010 main.go:141] libmachine: (ha-163902-m03)   <cpu mode='host-passthrough'>
	I0819 19:07:36.506929  452010 main.go:141] libmachine: (ha-163902-m03)   
	I0819 19:07:36.506938  452010 main.go:141] libmachine: (ha-163902-m03)   </cpu>
	I0819 19:07:36.506946  452010 main.go:141] libmachine: (ha-163902-m03)   <os>
	I0819 19:07:36.506962  452010 main.go:141] libmachine: (ha-163902-m03)     <type>hvm</type>
	I0819 19:07:36.506972  452010 main.go:141] libmachine: (ha-163902-m03)     <boot dev='cdrom'/>
	I0819 19:07:36.506982  452010 main.go:141] libmachine: (ha-163902-m03)     <boot dev='hd'/>
	I0819 19:07:36.506994  452010 main.go:141] libmachine: (ha-163902-m03)     <bootmenu enable='no'/>
	I0819 19:07:36.507002  452010 main.go:141] libmachine: (ha-163902-m03)   </os>
	I0819 19:07:36.507013  452010 main.go:141] libmachine: (ha-163902-m03)   <devices>
	I0819 19:07:36.507024  452010 main.go:141] libmachine: (ha-163902-m03)     <disk type='file' device='cdrom'>
	I0819 19:07:36.507041  452010 main.go:141] libmachine: (ha-163902-m03)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/boot2docker.iso'/>
	I0819 19:07:36.507057  452010 main.go:141] libmachine: (ha-163902-m03)       <target dev='hdc' bus='scsi'/>
	I0819 19:07:36.507068  452010 main.go:141] libmachine: (ha-163902-m03)       <readonly/>
	I0819 19:07:36.507077  452010 main.go:141] libmachine: (ha-163902-m03)     </disk>
	I0819 19:07:36.507090  452010 main.go:141] libmachine: (ha-163902-m03)     <disk type='file' device='disk'>
	I0819 19:07:36.507099  452010 main.go:141] libmachine: (ha-163902-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 19:07:36.507110  452010 main.go:141] libmachine: (ha-163902-m03)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/ha-163902-m03.rawdisk'/>
	I0819 19:07:36.507117  452010 main.go:141] libmachine: (ha-163902-m03)       <target dev='hda' bus='virtio'/>
	I0819 19:07:36.507122  452010 main.go:141] libmachine: (ha-163902-m03)     </disk>
	I0819 19:07:36.507131  452010 main.go:141] libmachine: (ha-163902-m03)     <interface type='network'>
	I0819 19:07:36.507137  452010 main.go:141] libmachine: (ha-163902-m03)       <source network='mk-ha-163902'/>
	I0819 19:07:36.507144  452010 main.go:141] libmachine: (ha-163902-m03)       <model type='virtio'/>
	I0819 19:07:36.507152  452010 main.go:141] libmachine: (ha-163902-m03)     </interface>
	I0819 19:07:36.507157  452010 main.go:141] libmachine: (ha-163902-m03)     <interface type='network'>
	I0819 19:07:36.507171  452010 main.go:141] libmachine: (ha-163902-m03)       <source network='default'/>
	I0819 19:07:36.507178  452010 main.go:141] libmachine: (ha-163902-m03)       <model type='virtio'/>
	I0819 19:07:36.507184  452010 main.go:141] libmachine: (ha-163902-m03)     </interface>
	I0819 19:07:36.507190  452010 main.go:141] libmachine: (ha-163902-m03)     <serial type='pty'>
	I0819 19:07:36.507195  452010 main.go:141] libmachine: (ha-163902-m03)       <target port='0'/>
	I0819 19:07:36.507205  452010 main.go:141] libmachine: (ha-163902-m03)     </serial>
	I0819 19:07:36.507211  452010 main.go:141] libmachine: (ha-163902-m03)     <console type='pty'>
	I0819 19:07:36.507220  452010 main.go:141] libmachine: (ha-163902-m03)       <target type='serial' port='0'/>
	I0819 19:07:36.507226  452010 main.go:141] libmachine: (ha-163902-m03)     </console>
	I0819 19:07:36.507232  452010 main.go:141] libmachine: (ha-163902-m03)     <rng model='virtio'>
	I0819 19:07:36.507241  452010 main.go:141] libmachine: (ha-163902-m03)       <backend model='random'>/dev/random</backend>
	I0819 19:07:36.507248  452010 main.go:141] libmachine: (ha-163902-m03)     </rng>
	I0819 19:07:36.507281  452010 main.go:141] libmachine: (ha-163902-m03)     
	I0819 19:07:36.507305  452010 main.go:141] libmachine: (ha-163902-m03)     
	I0819 19:07:36.507323  452010 main.go:141] libmachine: (ha-163902-m03)   </devices>
	I0819 19:07:36.507335  452010 main.go:141] libmachine: (ha-163902-m03) </domain>
	I0819 19:07:36.507348  452010 main.go:141] libmachine: (ha-163902-m03) 
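
The block above is the complete libvirt domain XML the kvm2 driver defines for the new node: boot ISO on a SCSI CD-ROM, the raw virtio disk, two virtio NICs on the mk-ha-163902 and default networks, a serial console, and a virtio RNG. The driver itself goes through the libvirt API; as a rough equivalent, the sketch below registers and boots a domain from a saved XML file by shelling out to virsh. The XML path is hypothetical, and virsh being on PATH is assumed.

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStart registers a libvirt domain from an XML file and boots it.
// Illustrative equivalent of the step above, not the driver's actual code.
func defineAndStart(xmlPath, name string) error {
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical path where the XML logged above has been written.
	if err := defineAndStart("/tmp/ha-163902-m03.xml", "ha-163902-m03"); err != nil {
		fmt.Println(err)
	}
}
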
	I0819 19:07:36.514621  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:01:59:d5 in network default
	I0819 19:07:36.515251  452010 main.go:141] libmachine: (ha-163902-m03) Ensuring networks are active...
	I0819 19:07:36.515280  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:36.516061  452010 main.go:141] libmachine: (ha-163902-m03) Ensuring network default is active
	I0819 19:07:36.516373  452010 main.go:141] libmachine: (ha-163902-m03) Ensuring network mk-ha-163902 is active
	I0819 19:07:36.516729  452010 main.go:141] libmachine: (ha-163902-m03) Getting domain xml...
	I0819 19:07:36.517399  452010 main.go:141] libmachine: (ha-163902-m03) Creating domain...
	I0819 19:07:37.778280  452010 main.go:141] libmachine: (ha-163902-m03) Waiting to get IP...
	I0819 19:07:37.778990  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:37.779391  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:37.779424  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:37.779376  452804 retry.go:31] will retry after 201.989618ms: waiting for machine to come up
	I0819 19:07:37.982964  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:37.983443  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:37.983475  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:37.983388  452804 retry.go:31] will retry after 261.868176ms: waiting for machine to come up
	I0819 19:07:38.247079  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:38.247579  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:38.247614  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:38.247531  452804 retry.go:31] will retry after 461.578514ms: waiting for machine to come up
	I0819 19:07:38.711258  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:38.711717  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:38.711748  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:38.711682  452804 retry.go:31] will retry after 459.351794ms: waiting for machine to come up
	I0819 19:07:39.172292  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:39.172698  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:39.172726  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:39.172651  452804 retry.go:31] will retry after 511.700799ms: waiting for machine to come up
	I0819 19:07:39.686535  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:39.686958  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:39.686991  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:39.686913  452804 retry.go:31] will retry after 731.052181ms: waiting for machine to come up
	I0819 19:07:40.419905  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:40.420410  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:40.420439  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:40.420350  452804 retry.go:31] will retry after 818.727574ms: waiting for machine to come up
	I0819 19:07:41.240939  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:41.241384  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:41.241410  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:41.241347  452804 retry.go:31] will retry after 1.138879364s: waiting for machine to come up
	I0819 19:07:42.382012  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:42.382402  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:42.382429  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:42.382370  452804 retry.go:31] will retry after 1.474683081s: waiting for machine to come up
	I0819 19:07:43.858547  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:43.859046  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:43.859077  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:43.858993  452804 retry.go:31] will retry after 1.583490461s: waiting for machine to come up
	I0819 19:07:45.444669  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:45.445085  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:45.445109  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:45.445037  452804 retry.go:31] will retry after 2.780886536s: waiting for machine to come up
	I0819 19:07:48.227136  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:48.227508  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:48.227544  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:48.227451  452804 retry.go:31] will retry after 3.081211101s: waiting for machine to come up
	I0819 19:07:51.310606  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:51.311119  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:51.311149  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:51.311040  452804 retry.go:31] will retry after 4.021238642s: waiting for machine to come up
	I0819 19:07:55.336313  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:55.336861  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find current IP address of domain ha-163902-m03 in network mk-ha-163902
	I0819 19:07:55.336892  452010 main.go:141] libmachine: (ha-163902-m03) DBG | I0819 19:07:55.336810  452804 retry.go:31] will retry after 4.178616831s: waiting for machine to come up
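
The retry lines above follow a simple pattern: poll the libvirt network's DHCP leases for the new MAC address and sleep a little longer (with jitter) after each miss. A minimal sketch of that loop is below; the lookup callback is a hypothetical stand-in for the lease query, and the backoff constants are assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup with a growing, jittered delay until an IP appears
// or the deadline passes, mirroring the "will retry after ..." lines above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Add jitter and grow the base delay ~1.5x each round.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(func() (string, error) { return "192.168.39.59", nil }, time.Minute)
	fmt.Println(ip, err)
}
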
	I0819 19:07:59.519446  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.519840  452010 main.go:141] libmachine: (ha-163902-m03) Found IP for machine: 192.168.39.59
	I0819 19:07:59.519869  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has current primary IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.519880  452010 main.go:141] libmachine: (ha-163902-m03) Reserving static IP address...
	I0819 19:07:59.520257  452010 main.go:141] libmachine: (ha-163902-m03) DBG | unable to find host DHCP lease matching {name: "ha-163902-m03", mac: "52:54:00:64:e1:28", ip: "192.168.39.59"} in network mk-ha-163902
	I0819 19:07:59.602930  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Getting to WaitForSSH function...
	I0819 19:07:59.602963  452010 main.go:141] libmachine: (ha-163902-m03) Reserved static IP address: 192.168.39.59
	I0819 19:07:59.602977  452010 main.go:141] libmachine: (ha-163902-m03) Waiting for SSH to be available...
	I0819 19:07:59.605508  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.605880  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:minikube Clientid:01:52:54:00:64:e1:28}
	I0819 19:07:59.605912  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.606089  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Using SSH client type: external
	I0819 19:07:59.606124  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa (-rw-------)
	I0819 19:07:59.606186  452010 main.go:141] libmachine: (ha-163902-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:07:59.606211  452010 main.go:141] libmachine: (ha-163902-m03) DBG | About to run SSH command:
	I0819 19:07:59.606230  452010 main.go:141] libmachine: (ha-163902-m03) DBG | exit 0
	I0819 19:07:59.729073  452010 main.go:141] libmachine: (ha-163902-m03) DBG | SSH cmd err, output: <nil>: 
	I0819 19:07:59.729371  452010 main.go:141] libmachine: (ha-163902-m03) KVM machine creation complete!
	I0819 19:07:59.729687  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetConfigRaw
	I0819 19:07:59.730238  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:07:59.730492  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:07:59.730750  452010 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 19:07:59.730777  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetState
	I0819 19:07:59.732125  452010 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 19:07:59.732138  452010 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 19:07:59.732145  452010 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 19:07:59.732150  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:07:59.734706  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.735097  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:07:59.735132  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.735278  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:07:59.735490  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:07:59.735645  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:07:59.735786  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:07:59.735964  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:07:59.736214  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0819 19:07:59.736231  452010 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 19:07:59.836625  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:07:59.836650  452010 main.go:141] libmachine: Detecting the provisioner...
	I0819 19:07:59.836658  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:07:59.839611  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.840018  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:07:59.840041  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.840200  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:07:59.840468  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:07:59.840644  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:07:59.840787  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:07:59.840990  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:07:59.841246  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0819 19:07:59.841266  452010 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 19:07:59.941671  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 19:07:59.941744  452010 main.go:141] libmachine: found compatible host: buildroot
	I0819 19:07:59.941758  452010 main.go:141] libmachine: Provisioning with buildroot...
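
Provisioner detection above works off the guest's /etc/os-release: the ID field (buildroot here) selects the matching provisioner. A small illustrative parser, not libmachine's actual detector:

package main

import (
	"fmt"
	"strings"
)

// detectOS pulls the ID= field out of "cat /etc/os-release" output.
func detectOS(osRelease string) string {
	for _, line := range strings.Split(osRelease, "\n") {
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println(detectOS(out)) // buildroot
}
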
	I0819 19:07:59.941770  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetMachineName
	I0819 19:07:59.942040  452010 buildroot.go:166] provisioning hostname "ha-163902-m03"
	I0819 19:07:59.942064  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetMachineName
	I0819 19:07:59.942238  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:07:59.944835  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.945391  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:07:59.945424  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:07:59.945655  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:07:59.945903  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:07:59.946061  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:07:59.946259  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:07:59.946454  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:07:59.946678  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0819 19:07:59.946696  452010 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-163902-m03 && echo "ha-163902-m03" | sudo tee /etc/hostname
	I0819 19:08:00.059346  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-163902-m03
	
	I0819 19:08:00.059388  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:08:00.062867  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.063351  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.063386  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.063610  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:08:00.063854  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.064068  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.064291  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:08:00.064511  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:08:00.064741  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0819 19:08:00.064766  452010 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-163902-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-163902-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-163902-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:08:00.174242  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
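
The hostname step above runs one composite shell command over SSH: set the hostname, write /etc/hostname, then patch or append the 127.0.1.1 entry in /etc/hosts. Below is a sketch of assembling that command string in Go; the helper name is an assumption, and the script body simply restates what was logged.

package main

import "fmt"

// hostnameCmd builds a provisioning command like the one run above.
func hostnameCmd(hostname string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostnameCmd("ha-163902-m03"))
}
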
	I0819 19:08:00.174275  452010 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:08:00.174292  452010 buildroot.go:174] setting up certificates
	I0819 19:08:00.174303  452010 provision.go:84] configureAuth start
	I0819 19:08:00.174312  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetMachineName
	I0819 19:08:00.174651  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:08:00.177712  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.178225  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.178257  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.178434  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:08:00.181016  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.181423  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.181460  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.181697  452010 provision.go:143] copyHostCerts
	I0819 19:08:00.181736  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:08:00.181774  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:08:00.181782  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:08:00.181847  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:08:00.181932  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:08:00.181954  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:08:00.181959  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:08:00.181993  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:08:00.182058  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:08:00.182077  452010 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:08:00.182084  452010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:08:00.182111  452010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:08:00.182188  452010 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.ha-163902-m03 san=[127.0.0.1 192.168.39.59 ha-163902-m03 localhost minikube]
	I0819 19:08:00.339541  452010 provision.go:177] copyRemoteCerts
	I0819 19:08:00.339611  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:08:00.339642  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:08:00.342788  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.343151  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.343183  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.343382  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:08:00.343619  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.343811  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:08:00.343949  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:08:00.427994  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 19:08:00.428096  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:08:00.453164  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 19:08:00.453264  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 19:08:00.478133  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 19:08:00.478226  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 19:08:00.503354  452010 provision.go:87] duration metric: took 329.03716ms to configureAuth
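
configureAuth above copies the host CA material and issues a per-node server certificate whose SANs cover 127.0.0.1, the node IP, the node name, localhost, and minikube, then scps it to /etc/docker on the guest. A minimal sketch of issuing such a certificate with Go's crypto/x509 follows, using a throwaway self-signed CA in place of .minikube/certs/ca.pem; the field values and validity periods are illustrative, not minikube's certs.go.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate carrying the IP and DNS SANs
// from the "generating server cert" line above.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, names []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-163902-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     names,
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway self-signed CA standing in for the minikube CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	der, err := issueServerCert(ca, caKey,
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.59")},
		[]string{"ha-163902-m03", "localhost", "minikube"})
	fmt.Println(len(der), err)
}
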
	I0819 19:08:00.503389  452010 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:08:00.503592  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:08:00.503669  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:08:00.506412  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.506727  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.506761  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.506986  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:08:00.507176  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.507347  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.507478  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:08:00.507662  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:08:00.507842  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0819 19:08:00.507857  452010 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:08:00.766314  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:08:00.766349  452010 main.go:141] libmachine: Checking connection to Docker...
	I0819 19:08:00.766359  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetURL
	I0819 19:08:00.767607  452010 main.go:141] libmachine: (ha-163902-m03) DBG | Using libvirt version 6000000
	I0819 19:08:00.769654  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.769940  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.769965  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.770114  452010 main.go:141] libmachine: Docker is up and running!
	I0819 19:08:00.770130  452010 main.go:141] libmachine: Reticulating splines...
	I0819 19:08:00.770139  452010 client.go:171] duration metric: took 24.661112457s to LocalClient.Create
	I0819 19:08:00.770168  452010 start.go:167] duration metric: took 24.661176781s to libmachine.API.Create "ha-163902"
	I0819 19:08:00.770181  452010 start.go:293] postStartSetup for "ha-163902-m03" (driver="kvm2")
	I0819 19:08:00.770194  452010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:08:00.770251  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:08:00.770522  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:08:00.770547  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:08:00.772714  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.773038  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.773063  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.773284  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:08:00.773490  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.773670  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:08:00.773823  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:08:00.855905  452010 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:08:00.860522  452010 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:08:00.860561  452010 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:08:00.860637  452010 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:08:00.860711  452010 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:08:00.860723  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 19:08:00.860806  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:08:00.870943  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:08:00.896463  452010 start.go:296] duration metric: took 126.241228ms for postStartSetup
	I0819 19:08:00.896533  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetConfigRaw
	I0819 19:08:00.897179  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:08:00.900265  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.900710  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.900740  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.901076  452010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:08:00.901336  452010 start.go:128] duration metric: took 24.811490278s to createHost
	I0819 19:08:00.901363  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:08:00.904010  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.904443  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:00.904482  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:00.904708  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:08:00.904944  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.905158  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:00.905329  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:08:00.905516  452010 main.go:141] libmachine: Using SSH client type: native
	I0819 19:08:00.905693  452010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0819 19:08:00.905705  452010 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:08:01.005651  452010 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094480.983338337
	
	I0819 19:08:01.005682  452010 fix.go:216] guest clock: 1724094480.983338337
	I0819 19:08:01.005691  452010 fix.go:229] Guest: 2024-08-19 19:08:00.983338337 +0000 UTC Remote: 2024-08-19 19:08:00.90135049 +0000 UTC m=+149.520267210 (delta=81.987847ms)
	I0819 19:08:01.005713  452010 fix.go:200] guest clock delta is within tolerance: 81.987847ms
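
The clock check above compares the guest's "date +%s.%N" output with the host clock and accepts the ~82ms delta as within tolerance. A small sketch of that comparison follows; the one-second tolerance in main is an assumed value for illustration.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns how far
// behind (or ahead) the guest clock is relative to the local clock.
func clockDelta(guestOutput string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	d, _ := clockDelta("1724094480.983338337\n")
	if math.Abs(d.Seconds()) < 1.0 {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	}
}
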
	I0819 19:08:01.005719  452010 start.go:83] releasing machines lock for "ha-163902-m03", held for 24.916039308s
	I0819 19:08:01.005738  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:08:01.006030  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:08:01.008918  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:01.009375  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:01.009408  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:01.011766  452010 out.go:177] * Found network options:
	I0819 19:08:01.013204  452010 out.go:177]   - NO_PROXY=192.168.39.227,192.168.39.162
	W0819 19:08:01.014661  452010 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 19:08:01.014721  452010 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 19:08:01.014747  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:08:01.015512  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:08:01.015734  452010 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:08:01.015852  452010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:08:01.015895  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	W0819 19:08:01.015907  452010 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 19:08:01.015930  452010 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 19:08:01.015999  452010 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:08:01.016019  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:08:01.018993  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:01.019218  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:01.019377  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:01.019409  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:01.019544  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:08:01.019630  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:01.019658  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:01.019780  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:01.019867  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:08:01.019946  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:08:01.020119  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:08:01.020121  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:08:01.020268  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:08:01.020406  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:08:01.256605  452010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:08:01.262429  452010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:08:01.262510  452010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:08:01.279148  452010 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:08:01.279178  452010 start.go:495] detecting cgroup driver to use...
	I0819 19:08:01.279279  452010 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:08:01.295140  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:08:01.310453  452010 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:08:01.310548  452010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:08:01.325144  452010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:08:01.339258  452010 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:08:01.457252  452010 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:08:01.608271  452010 docker.go:233] disabling docker service ...
	I0819 19:08:01.608362  452010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:08:01.623400  452010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:08:01.636827  452010 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:08:01.763505  452010 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:08:01.888305  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:08:01.904137  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:08:01.925244  452010 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:08:01.925338  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:08:01.936413  452010 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:08:01.936497  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:08:01.947087  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:08:01.958019  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:08:01.968792  452010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:08:01.979506  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:08:01.989753  452010 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:08:02.008365  452010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:08:02.018807  452010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:08:02.028653  452010 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:08:02.028726  452010 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:08:02.041766  452010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:08:02.051544  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:08:02.174077  452010 ssh_runner.go:195] Run: sudo systemctl restart crio
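
The CRI-O setup above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) followed by a daemon-reload and crio restart. The sketch below reproduces the first two edits with Go's regexp package instead of sed; the helper is an assumption, not minikube's crio.go.

package main

import (
	"os"
	"regexp"
)

// setCrioKeys rewrites the pause_image and cgroup_manager keys in a CRI-O
// drop-in file, matching the logged sed commands.
func setCrioKeys(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	_ = setCrioKeys("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs")
}
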
	I0819 19:08:02.317778  452010 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:08:02.317862  452010 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:08:02.322416  452010 start.go:563] Will wait 60s for crictl version
	I0819 19:08:02.322484  452010 ssh_runner.go:195] Run: which crictl
	I0819 19:08:02.326570  452010 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:08:02.365977  452010 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:08:02.366079  452010 ssh_runner.go:195] Run: crio --version
	I0819 19:08:02.394133  452010 ssh_runner.go:195] Run: crio --version
	I0819 19:08:02.429162  452010 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:08:02.430494  452010 out.go:177]   - env NO_PROXY=192.168.39.227
	I0819 19:08:02.431834  452010 out.go:177]   - env NO_PROXY=192.168.39.227,192.168.39.162
	I0819 19:08:02.432993  452010 main.go:141] libmachine: (ha-163902-m03) Calling .GetIP
	I0819 19:08:02.435949  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:02.436345  452010 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:08:02.436374  452010 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:08:02.436663  452010 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:08:02.440966  452010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
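
The /etc/hosts update above is made idempotent by filtering out any existing host.minikube.internal line before appending the new mapping and copying a temp file back into place. A local Go sketch of the same idea (the test itself runs the equivalent shell pipeline over SSH on the guest):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for name, appends ip<TAB>name,
// and writes the file back through a temp file.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal")
}
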
	I0819 19:08:02.454241  452010 mustload.go:65] Loading cluster: ha-163902
	I0819 19:08:02.454555  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:08:02.454969  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:02.455036  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:02.470591  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42385
	I0819 19:08:02.471041  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:02.471544  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:08:02.471567  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:02.471953  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:02.472185  452010 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:08:02.473914  452010 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:08:02.474219  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:02.474266  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:02.489232  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38515
	I0819 19:08:02.489782  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:02.490298  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:08:02.490325  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:02.490705  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:02.490980  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:08:02.491183  452010 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902 for IP: 192.168.39.59
	I0819 19:08:02.491198  452010 certs.go:194] generating shared ca certs ...
	I0819 19:08:02.491220  452010 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:08:02.491389  452010 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:08:02.491466  452010 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:08:02.491481  452010 certs.go:256] generating profile certs ...
	I0819 19:08:02.491571  452010 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key
	I0819 19:08:02.491604  452010 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.6e37453b
	I0819 19:08:02.491619  452010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.6e37453b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.162 192.168.39.59 192.168.39.254]
	I0819 19:08:02.699925  452010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.6e37453b ...
	I0819 19:08:02.699960  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.6e37453b: {Name:mkdb2ac70439b3fafaf57c897ab119c81d9f16b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:08:02.700137  452010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.6e37453b ...
	I0819 19:08:02.700151  452010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.6e37453b: {Name:mkdb82289c5f550445a85b6895e8f4b5e0088fb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:08:02.700223  452010 certs.go:381] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.6e37453b -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt
	I0819 19:08:02.700358  452010 certs.go:385] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.6e37453b -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key
	I0819 19:08:02.700484  452010 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key
	I0819 19:08:02.700506  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 19:08:02.700524  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 19:08:02.700538  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 19:08:02.700553  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 19:08:02.700567  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 19:08:02.700589  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 19:08:02.700607  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 19:08:02.700621  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 19:08:02.700683  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:08:02.700726  452010 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:08:02.700740  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:08:02.700773  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:08:02.700805  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:08:02.700839  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:08:02.700896  452010 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:08:02.700936  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:08:02.700957  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 19:08:02.700975  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 19:08:02.701021  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:08:02.704616  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:08:02.705072  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:08:02.705107  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:08:02.705303  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:08:02.705557  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:08:02.705744  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:08:02.705871  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:08:02.777624  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 19:08:02.782262  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 19:08:02.793290  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 19:08:02.797370  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 19:08:02.808750  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 19:08:02.813037  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 19:08:02.824062  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 19:08:02.829115  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 19:08:02.844911  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 19:08:02.849690  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 19:08:02.861549  452010 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 19:08:02.865899  452010 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 19:08:02.877670  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:08:02.902542  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:08:02.927413  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:08:02.952642  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:08:02.976866  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 19:08:03.001679  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:08:03.025547  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:08:03.050013  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:08:03.073976  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:08:03.098015  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:08:03.122121  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:08:03.146271  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 19:08:03.163290  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 19:08:03.179944  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 19:08:03.196597  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 19:08:03.214309  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 19:08:03.231234  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 19:08:03.248232  452010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 19:08:03.264984  452010 ssh_runner.go:195] Run: openssl version
	I0819 19:08:03.270688  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:08:03.281587  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:08:03.286125  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:08:03.286220  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:08:03.291949  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:08:03.303301  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:08:03.314768  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:08:03.319802  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:08:03.319875  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:08:03.326008  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:08:03.336795  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:08:03.347810  452010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:08:03.352483  452010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:08:03.352572  452010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:08:03.358332  452010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
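The `openssl x509 -hash -noout` runs and the `ln -fs ... /etc/ssl/certs/<hash>.0` commands above follow OpenSSL's subject-hash lookup convention, so the guest's TLS stack can resolve the minikube CA and the test certificates by hash. A minimal Go sketch of the same pattern, offered only as an illustration (hypothetical helper, not minikube code; the paths are the ones shown in the log and writing into /etc/ssl/certs needs root):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the log's pattern: compute the OpenSSL subject
// hash of a CA certificate and symlink it as <hash>.0 in the certs directory
// so TLS libraries can find it via hash lookup.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Paths taken from the log above; running this for real requires root.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("error:", err)
	}
}
```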
	I0819 19:08:03.370380  452010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:08:03.374810  452010 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 19:08:03.374889  452010 kubeadm.go:934] updating node {m03 192.168.39.59 8443 v1.31.0 crio true true} ...
	I0819 19:08:03.375006  452010 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-163902-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:08:03.375041  452010 kube-vip.go:115] generating kube-vip config ...
	I0819 19:08:03.375096  452010 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 19:08:03.390821  452010 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 19:08:03.390918  452010 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
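The manifest above runs kube-vip as a static pod on each control-plane node: it announces the virtual IP 192.168.39.254 over ARP (vip_arp/vip_interface) and, with lb_enable/lb_port, load-balances API-server traffic on port 8443, which is why later steps address the cluster as control-plane.minikube.internal:8443. A minimal reachability probe for that VIP, as a sketch only (the address and port come from the config above; the probe itself is not part of minikube):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and port are taken from the kube-vip config above.
	addr := net.JoinHostPort("192.168.39.254", "8443")
	conn, err := tls.DialWithDialer(&net.Dialer{Timeout: 5 * time.Second}, "tcp", addr,
		&tls.Config{InsecureSkipVerify: true}) // we only care that the VIP answers, not cert validity
	if err != nil {
		fmt.Println("VIP not reachable yet:", err)
		return
	}
	defer conn.Close()
	fmt.Println("kube-vip is answering on", addr)
}
```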
	I0819 19:08:03.390999  452010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:08:03.401323  452010 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 19:08:03.401404  452010 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 19:08:03.413899  452010 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 19:08:03.413934  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 19:08:03.413935  452010 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0819 19:08:03.413940  452010 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0819 19:08:03.413955  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 19:08:03.414004  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:08:03.414015  452010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 19:08:03.414015  452010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 19:08:03.421878  452010 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 19:08:03.421923  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 19:08:03.449151  452010 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 19:08:03.449194  452010 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 19:08:03.449253  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 19:08:03.449353  452010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 19:08:03.502848  452010 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 19:08:03.502899  452010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
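The "Not caching binary, using https://dl.k8s.io/release/...?checksum=file:...sha256" lines mean each kubeadm/kubectl/kubelet binary is fetched from dl.k8s.io and checked against the published .sha256 file before being copied into /var/lib/minikube/binaries. A minimal sketch of that checksum-verified download pattern (illustrative only, not minikube's actual download code; the kubelet URL is the one named in the log and the local path is made up):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the hex SHA-256 of the bytes written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	// Write to disk and hash in one pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet"
	sum, err := fetch(url, "/tmp/kubelet")
	if err != nil {
		panic(err)
	}
	// The .sha256 file holds the expected hex digest; compare it with what we computed.
	resp, err := http.Get(url + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	if !strings.HasPrefix(strings.TrimSpace(string(want)), sum) {
		panic("checksum mismatch for kubelet")
	}
	fmt.Println("kubelet verified:", sum)
}
```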
	I0819 19:08:04.290555  452010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 19:08:04.300247  452010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 19:08:04.318384  452010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:08:04.336121  452010 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 19:08:04.353588  452010 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 19:08:04.358137  452010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:08:04.371117  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:08:04.499688  452010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:08:04.516516  452010 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:08:04.517014  452010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:04.517065  452010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:04.533118  452010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40889
	I0819 19:08:04.533627  452010 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:04.534203  452010 main.go:141] libmachine: Using API Version  1
	I0819 19:08:04.534229  452010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:04.534580  452010 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:04.534780  452010 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:08:04.534971  452010 start.go:317] joinCluster: &{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cluster
Name:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:04.535149  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 19:08:04.535171  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:08:04.538452  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:08:04.538860  452010 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:08:04.538887  452010 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:08:04.539128  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:08:04.539341  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:08:04.539540  452010 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:08:04.539709  452010 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:08:04.686782  452010 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:08:04.686841  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3w4elc.e4uij2tmkcoo2axg --discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-163902-m03 --control-plane --apiserver-advertise-address=192.168.39.59 --apiserver-bind-port=8443"
	I0819 19:08:25.234963  452010 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3w4elc.e4uij2tmkcoo2axg --discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-163902-m03 --control-plane --apiserver-advertise-address=192.168.39.59 --apiserver-bind-port=8443": (20.548098323s)
	I0819 19:08:25.235003  452010 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 19:08:25.823383  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-163902-m03 minikube.k8s.io/updated_at=2024_08_19T19_08_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=ha-163902 minikube.k8s.io/primary=false
	I0819 19:08:25.933989  452010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-163902-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 19:08:26.057761  452010 start.go:319] duration metric: took 21.522783925s to joinCluster
	I0819 19:08:26.057846  452010 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:08:26.058174  452010 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:08:26.060509  452010 out.go:177] * Verifying Kubernetes components...
	I0819 19:08:26.061862  452010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:08:26.350042  452010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:08:26.381179  452010 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:08:26.381576  452010 kapi.go:59] client config for ha-163902: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key", CAFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 19:08:26.381668  452010 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.227:8443
	I0819 19:08:26.381963  452010 node_ready.go:35] waiting up to 6m0s for node "ha-163902-m03" to be "Ready" ...
	I0819 19:08:26.382071  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:26.382081  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:26.382092  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:26.382100  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:26.385928  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:26.883195  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:26.883227  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:26.883239  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:26.883246  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:26.886826  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:27.382499  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:27.382538  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:27.382548  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:27.382552  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:27.387767  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:27.882159  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:27.882184  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:27.882195  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:27.882201  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:27.885513  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:28.383264  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:28.383291  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:28.383302  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:28.383309  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:28.387040  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:28.387555  452010 node_ready.go:53] node "ha-163902-m03" has status "Ready":"False"
	I0819 19:08:28.883000  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:28.883028  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:28.883037  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:28.883041  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:28.885958  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:08:29.382390  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:29.382417  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:29.382428  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:29.382436  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:29.388004  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:29.882436  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:29.882465  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:29.882477  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:29.882483  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:29.886513  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:08:30.382185  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:30.382210  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:30.382218  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:30.382222  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:30.385984  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:30.882401  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:30.882424  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:30.882434  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:30.882437  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:30.886101  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:30.886651  452010 node_ready.go:53] node "ha-163902-m03" has status "Ready":"False"
	I0819 19:08:31.383169  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:31.383197  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:31.383204  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:31.383208  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:31.392558  452010 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 19:08:31.882538  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:31.882567  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:31.882579  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:31.882584  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:31.886248  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:32.382331  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:32.382366  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:32.382380  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:32.382385  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:32.413073  452010 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0819 19:08:32.883122  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:32.883147  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:32.883155  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:32.883161  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:32.886449  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:32.887070  452010 node_ready.go:53] node "ha-163902-m03" has status "Ready":"False"
	I0819 19:08:33.382309  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:33.382336  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:33.382345  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:33.382349  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:33.387650  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:33.882318  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:33.882340  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:33.882361  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:33.882374  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:33.885700  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:34.383092  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:34.383118  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:34.383127  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:34.383131  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:34.386669  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:34.883193  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:34.883225  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:34.883236  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:34.883245  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:34.886875  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:34.887594  452010 node_ready.go:53] node "ha-163902-m03" has status "Ready":"False"
	I0819 19:08:35.382883  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:35.382908  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:35.382919  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:35.382924  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:35.388306  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:35.883136  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:35.883161  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:35.883172  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:35.883179  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:35.886966  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:36.382409  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:36.382439  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:36.382449  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:36.382454  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:36.386669  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:08:36.882948  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:36.882979  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:36.882991  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:36.882999  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:36.887423  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:08:36.888586  452010 node_ready.go:53] node "ha-163902-m03" has status "Ready":"False"
	I0819 19:08:37.382997  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:37.383023  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:37.383031  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:37.383036  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:37.388940  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:37.882907  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:37.882932  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:37.882943  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:37.882949  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:37.886760  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:38.382378  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:38.382404  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:38.382412  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:38.382415  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:38.386159  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:38.882979  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:38.883002  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:38.883013  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:38.883016  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:38.886175  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:39.382642  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:39.382667  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:39.382678  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:39.382684  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:39.388121  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:39.388749  452010 node_ready.go:53] node "ha-163902-m03" has status "Ready":"False"
	I0819 19:08:39.882457  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:39.882484  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:39.882496  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:39.882499  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:39.885976  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:40.383115  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:40.383146  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:40.383158  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:40.383165  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:40.386943  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:40.882982  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:40.883008  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:40.883017  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:40.883021  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:40.886577  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:41.382955  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:41.383040  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:41.383057  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:41.383067  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:41.393526  452010 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0819 19:08:41.394054  452010 node_ready.go:53] node "ha-163902-m03" has status "Ready":"False"
	I0819 19:08:41.882909  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:41.882934  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:41.882942  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:41.882946  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:41.886816  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:42.382558  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:42.382585  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:42.382593  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:42.382600  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:42.386198  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:42.883124  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:42.883150  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:42.883160  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:42.883165  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:42.887151  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:43.383139  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:43.383164  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.383172  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.383176  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.389641  452010 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 19:08:43.882399  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:43.882419  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.882431  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.882434  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.885368  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:08:43.885959  452010 node_ready.go:49] node "ha-163902-m03" has status "Ready":"True"
	I0819 19:08:43.885980  452010 node_ready.go:38] duration metric: took 17.503998711s for node "ha-163902-m03" to be "Ready" ...
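The repeated GET /api/v1/nodes/ha-163902-m03 calls above are node_ready.go polling the node object until its Ready condition turns True, which here took about 17.5s after the join. Roughly the same loop can be written directly against client-go; the sketch below is only an illustration under that assumption (the kubeconfig path and poll interval are invented for the example):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports Ready=True,
// similar in spirit to the node_ready.go loop in the log.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // retry on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Kubeconfig path is an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19423-430949/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-163902-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}
```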
	I0819 19:08:43.885989  452010 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:08:43.886052  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:08:43.886061  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.886068  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.886078  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.894167  452010 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0819 19:08:43.902800  452010 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nkths" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.902922  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-nkths
	I0819 19:08:43.902933  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.902945  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.902955  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.913178  452010 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0819 19:08:43.914107  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:43.914131  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.914140  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.914146  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.923550  452010 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 19:08:43.924109  452010 pod_ready.go:93] pod "coredns-6f6b679f8f-nkths" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:43.924137  452010 pod_ready.go:82] duration metric: took 21.302191ms for pod "coredns-6f6b679f8f-nkths" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.924152  452010 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wmp8k" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.924241  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-wmp8k
	I0819 19:08:43.924252  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.924262  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.924268  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.935934  452010 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0819 19:08:43.936762  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:43.936790  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.936802  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.936810  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.942538  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:43.943140  452010 pod_ready.go:93] pod "coredns-6f6b679f8f-wmp8k" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:43.943168  452010 pod_ready.go:82] duration metric: took 19.008048ms for pod "coredns-6f6b679f8f-wmp8k" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.943182  452010 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.943271  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-163902
	I0819 19:08:43.943281  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.943291  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.943298  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.955730  452010 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0819 19:08:43.956370  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:43.956390  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.956397  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.956413  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.964228  452010 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 19:08:43.964868  452010 pod_ready.go:93] pod "etcd-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:43.964892  452010 pod_ready.go:82] duration metric: took 21.699653ms for pod "etcd-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.964906  452010 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.964984  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-163902-m02
	I0819 19:08:43.964993  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.965000  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.965007  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.967866  452010 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 19:08:43.968384  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:43.968400  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:43.968410  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:43.968417  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:43.971446  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:43.971896  452010 pod_ready.go:93] pod "etcd-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:43.971915  452010 pod_ready.go:82] duration metric: took 7.00153ms for pod "etcd-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:43.971926  452010 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:44.083312  452010 request.go:632] Waited for 111.279722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-163902-m03
	I0819 19:08:44.083379  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-163902-m03
	I0819 19:08:44.083384  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:44.083392  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:44.083403  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:44.087420  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:44.283432  452010 request.go:632] Waited for 195.380757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:44.283539  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:44.283548  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:44.283566  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:44.283578  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:44.286995  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:44.287527  452010 pod_ready.go:93] pod "etcd-ha-163902-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:44.287548  452010 pod_ready.go:82] duration metric: took 315.616421ms for pod "etcd-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:44.287566  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:44.482809  452010 request.go:632] Waited for 195.156559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902
	I0819 19:08:44.482879  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902
	I0819 19:08:44.482885  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:44.482893  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:44.482898  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:44.486578  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:44.683016  452010 request.go:632] Waited for 195.47481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:44.683117  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:44.683128  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:44.683141  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:44.683151  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:44.687428  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:08:44.688462  452010 pod_ready.go:93] pod "kube-apiserver-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:44.688487  452010 pod_ready.go:82] duration metric: took 400.913719ms for pod "kube-apiserver-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:44.688500  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:44.882696  452010 request.go:632] Waited for 194.112883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902-m02
	I0819 19:08:44.882790  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902-m02
	I0819 19:08:44.882795  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:44.882803  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:44.882808  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:44.886290  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:45.083331  452010 request.go:632] Waited for 196.362296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:45.083426  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:45.083439  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:45.083448  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:45.083452  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:45.087138  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:45.087948  452010 pod_ready.go:93] pod "kube-apiserver-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:45.087967  452010 pod_ready.go:82] duration metric: took 399.459635ms for pod "kube-apiserver-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:45.087977  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:45.283166  452010 request.go:632] Waited for 195.099626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902-m03
	I0819 19:08:45.283231  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-163902-m03
	I0819 19:08:45.283256  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:45.283266  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:45.283271  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:45.287363  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:08:45.482561  452010 request.go:632] Waited for 194.317595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:45.482642  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:45.482649  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:45.482660  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:45.482666  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:45.486243  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:45.486936  452010 pod_ready.go:93] pod "kube-apiserver-ha-163902-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:45.486958  452010 pod_ready.go:82] duration metric: took 398.972984ms for pod "kube-apiserver-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:45.486974  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:45.683031  452010 request.go:632] Waited for 195.974322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902
	I0819 19:08:45.683107  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902
	I0819 19:08:45.683115  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:45.683122  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:45.683126  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:45.686806  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:45.883260  452010 request.go:632] Waited for 195.449245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:45.883331  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:45.883338  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:45.883351  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:45.883361  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:45.886660  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:45.887178  452010 pod_ready.go:93] pod "kube-controller-manager-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:45.887205  452010 pod_ready.go:82] duration metric: took 400.222232ms for pod "kube-controller-manager-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:45.887220  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:46.083315  452010 request.go:632] Waited for 196.007764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902-m02
	I0819 19:08:46.083413  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902-m02
	I0819 19:08:46.083424  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:46.083435  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:46.083441  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:46.086684  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:46.282594  452010 request.go:632] Waited for 195.289033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:46.282660  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:46.282665  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:46.282675  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:46.282682  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:46.286132  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:46.286819  452010 pod_ready.go:93] pod "kube-controller-manager-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:46.286838  452010 pod_ready.go:82] duration metric: took 399.610376ms for pod "kube-controller-manager-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:46.286849  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:46.482894  452010 request.go:632] Waited for 195.946883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902-m03
	I0819 19:08:46.482973  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-163902-m03
	I0819 19:08:46.482979  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:46.482987  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:46.482993  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:46.486332  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:46.683439  452010 request.go:632] Waited for 196.274914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:46.683519  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:46.683527  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:46.683534  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:46.683551  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:46.687351  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:46.687932  452010 pod_ready.go:93] pod "kube-controller-manager-ha-163902-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:46.687955  452010 pod_ready.go:82] duration metric: took 401.098178ms for pod "kube-controller-manager-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:46.687970  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4whvs" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:46.883141  452010 request.go:632] Waited for 195.089336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4whvs
	I0819 19:08:46.883241  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4whvs
	I0819 19:08:46.883252  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:46.883264  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:46.883271  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:46.886467  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:47.083466  452010 request.go:632] Waited for 196.390904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:47.083540  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:47.083545  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:47.083553  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:47.083557  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:47.086642  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:47.087156  452010 pod_ready.go:93] pod "kube-proxy-4whvs" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:47.087181  452010 pod_ready.go:82] duration metric: took 399.202246ms for pod "kube-proxy-4whvs" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:47.087194  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wxrsv" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:47.283324  452010 request.go:632] Waited for 196.023466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxrsv
	I0819 19:08:47.283401  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxrsv
	I0819 19:08:47.283409  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:47.283420  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:47.283426  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:47.286972  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:47.483030  452010 request.go:632] Waited for 195.414631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:47.483097  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:47.483102  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:47.483110  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:47.483115  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:47.486659  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:47.487234  452010 pod_ready.go:93] pod "kube-proxy-wxrsv" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:47.487258  452010 pod_ready.go:82] duration metric: took 400.05675ms for pod "kube-proxy-wxrsv" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:47.487273  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xq852" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:47.682816  452010 request.go:632] Waited for 195.458544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xq852
	I0819 19:08:47.682896  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xq852
	I0819 19:08:47.682904  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:47.682913  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:47.682920  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:47.686683  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:47.883407  452010 request.go:632] Waited for 196.145004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:47.883489  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:47.883506  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:47.883535  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:47.883543  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:47.886789  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:47.887318  452010 pod_ready.go:93] pod "kube-proxy-xq852" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:47.887338  452010 pod_ready.go:82] duration metric: took 400.057272ms for pod "kube-proxy-xq852" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:47.887351  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:48.082420  452010 request.go:632] Waited for 194.983477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902
	I0819 19:08:48.082496  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902
	I0819 19:08:48.082501  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:48.082508  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:48.082512  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:48.085794  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:48.282892  452010 request.go:632] Waited for 196.439767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:48.282964  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902
	I0819 19:08:48.282980  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:48.282989  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:48.282996  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:48.286338  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:48.287017  452010 pod_ready.go:93] pod "kube-scheduler-ha-163902" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:48.287038  452010 pod_ready.go:82] duration metric: took 399.679568ms for pod "kube-scheduler-ha-163902" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:48.287049  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:48.483208  452010 request.go:632] Waited for 196.075203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902-m02
	I0819 19:08:48.483326  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902-m02
	I0819 19:08:48.483338  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:48.483348  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:48.483357  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:48.487579  452010 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 19:08:48.682566  452010 request.go:632] Waited for 194.284217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:48.682653  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m02
	I0819 19:08:48.682664  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:48.682674  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:48.682681  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:48.686127  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:48.686638  452010 pod_ready.go:93] pod "kube-scheduler-ha-163902-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:48.686667  452010 pod_ready.go:82] duration metric: took 399.610522ms for pod "kube-scheduler-ha-163902-m02" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:48.686682  452010 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:48.882743  452010 request.go:632] Waited for 195.96599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902-m03
	I0819 19:08:48.882809  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-163902-m03
	I0819 19:08:48.882816  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:48.882824  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:48.882829  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:48.886603  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:49.082544  452010 request.go:632] Waited for 195.332113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:49.082624  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-163902-m03
	I0819 19:08:49.082632  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:49.082645  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:49.082655  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:49.086117  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:49.086515  452010 pod_ready.go:93] pod "kube-scheduler-ha-163902-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 19:08:49.086534  452010 pod_ready.go:82] duration metric: took 399.843776ms for pod "kube-scheduler-ha-163902-m03" in "kube-system" namespace to be "Ready" ...
	I0819 19:08:49.086547  452010 pod_ready.go:39] duration metric: took 5.200548968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:08:49.086566  452010 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:08:49.086627  452010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:08:49.101315  452010 api_server.go:72] duration metric: took 23.043421745s to wait for apiserver process to appear ...
	I0819 19:08:49.101354  452010 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:08:49.101378  452010 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0819 19:08:49.107203  452010 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0819 19:08:49.107304  452010 round_trippers.go:463] GET https://192.168.39.227:8443/version
	I0819 19:08:49.107314  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:49.107325  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:49.107331  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:49.108796  452010 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 19:08:49.108897  452010 api_server.go:141] control plane version: v1.31.0
	I0819 19:08:49.108922  452010 api_server.go:131] duration metric: took 7.558305ms to wait for apiserver health ...
	I0819 19:08:49.108931  452010 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:08:49.283386  452010 request.go:632] Waited for 174.348677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:08:49.283451  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:08:49.283456  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:49.283464  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:49.283469  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:49.290726  452010 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 19:08:49.297300  452010 system_pods.go:59] 24 kube-system pods found
	I0819 19:08:49.297339  452010 system_pods.go:61] "coredns-6f6b679f8f-nkths" [b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2] Running
	I0819 19:08:49.297346  452010 system_pods.go:61] "coredns-6f6b679f8f-wmp8k" [ca2ee9c4-992a-4251-a717-9843b7b41894] Running
	I0819 19:08:49.297351  452010 system_pods.go:61] "etcd-ha-163902" [fc3d22fd-d8b3-45ca-a6f6-9898285f85d4] Running
	I0819 19:08:49.297355  452010 system_pods.go:61] "etcd-ha-163902-m02" [f05c7c5e-de7b-4963-b897-204a95440e0d] Running
	I0819 19:08:49.297360  452010 system_pods.go:61] "etcd-ha-163902-m03" [596e35eb-102b-4a4f-8e3f-807b940a4bc6] Running
	I0819 19:08:49.297364  452010 system_pods.go:61] "kindnet-72q7r" [d376a785-a08b-4d53-bc5e-02425901c947] Running
	I0819 19:08:49.297369  452010 system_pods.go:61] "kindnet-97cnn" [75284cfb-20b5-4675-b53e-db4130cc6722] Running
	I0819 19:08:49.297373  452010 system_pods.go:61] "kindnet-bpwjl" [624275c2-a670-4cc0-a11c-70f3e1b78946] Running
	I0819 19:08:49.297378  452010 system_pods.go:61] "kube-apiserver-ha-163902" [ab456c18-e3cf-462b-8028-55ddf9b36306] Running
	I0819 19:08:49.297383  452010 system_pods.go:61] "kube-apiserver-ha-163902-m02" [f11ec84f-77aa-4003-a786-f6c54b4c14bd] Running
	I0819 19:08:49.297387  452010 system_pods.go:61] "kube-apiserver-ha-163902-m03" [977eaba2-9cd2-42e2-83a4-f973bdebbf2b] Running
	I0819 19:08:49.297392  452010 system_pods.go:61] "kube-controller-manager-ha-163902" [efeafbc7-7719-4632-9215-fd3c3ca09cc5] Running
	I0819 19:08:49.297397  452010 system_pods.go:61] "kube-controller-manager-ha-163902-m02" [2d80759a-8137-4e3c-a621-ba92532c9d9b] Running
	I0819 19:08:49.297405  452010 system_pods.go:61] "kube-controller-manager-ha-163902-m03" [470c09f7-df81-4a14-9cbf-71b73a570c48] Running
	I0819 19:08:49.297410  452010 system_pods.go:61] "kube-proxy-4whvs" [b241f2e5-d83c-432b-bdce-c26940efa096] Running
	I0819 19:08:49.297417  452010 system_pods.go:61] "kube-proxy-wxrsv" [3d78c5e8-eed2-4da5-9425-76f96e2d8ed6] Running
	I0819 19:08:49.297424  452010 system_pods.go:61] "kube-proxy-xq852" [f9dee0f1-ada2-4cb4-8734-c2a3456c6d37] Running
	I0819 19:08:49.297431  452010 system_pods.go:61] "kube-scheduler-ha-163902" [ee9d8a96-56d9-4bc2-9829-bdf4a0ec0749] Running
	I0819 19:08:49.297437  452010 system_pods.go:61] "kube-scheduler-ha-163902-m02" [90a559e2-5c2a-4db8-b9d4-21362177d686] Running
	I0819 19:08:49.297443  452010 system_pods.go:61] "kube-scheduler-ha-163902-m03" [dc50d60c-4da1-4279-bd7a-bf1d9486d7ad] Running
	I0819 19:08:49.297449  452010 system_pods.go:61] "kube-vip-ha-163902" [3143571f-0eb1-4f9a-ada5-27fda4724d18] Running
	I0819 19:08:49.297455  452010 system_pods.go:61] "kube-vip-ha-163902-m02" [ccd47f0d-1f18-4146-808f-5436d25c859f] Running
	I0819 19:08:49.297460  452010 system_pods.go:61] "kube-vip-ha-163902-m03" [6f2b8b81-6d0d-4baa-9818-890c639a811c] Running
	I0819 19:08:49.297466  452010 system_pods.go:61] "storage-provisioner" [05dffa5a-3372-4a79-94ad-33d14a4b7fd0] Running
	I0819 19:08:49.297481  452010 system_pods.go:74] duration metric: took 188.54165ms to wait for pod list to return data ...
	I0819 19:08:49.297494  452010 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:08:49.482952  452010 request.go:632] Waited for 185.365607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0819 19:08:49.483026  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0819 19:08:49.483031  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:49.483039  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:49.483045  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:49.486489  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:49.486648  452010 default_sa.go:45] found service account: "default"
	I0819 19:08:49.486674  452010 default_sa.go:55] duration metric: took 189.169834ms for default service account to be created ...
	I0819 19:08:49.486687  452010 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:08:49.683399  452010 request.go:632] Waited for 196.582808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:08:49.683547  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0819 19:08:49.683560  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:49.683570  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:49.683577  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:49.689251  452010 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 19:08:49.695863  452010 system_pods.go:86] 24 kube-system pods found
	I0819 19:08:49.695904  452010 system_pods.go:89] "coredns-6f6b679f8f-nkths" [b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2] Running
	I0819 19:08:49.695913  452010 system_pods.go:89] "coredns-6f6b679f8f-wmp8k" [ca2ee9c4-992a-4251-a717-9843b7b41894] Running
	I0819 19:08:49.695920  452010 system_pods.go:89] "etcd-ha-163902" [fc3d22fd-d8b3-45ca-a6f6-9898285f85d4] Running
	I0819 19:08:49.695926  452010 system_pods.go:89] "etcd-ha-163902-m02" [f05c7c5e-de7b-4963-b897-204a95440e0d] Running
	I0819 19:08:49.695931  452010 system_pods.go:89] "etcd-ha-163902-m03" [596e35eb-102b-4a4f-8e3f-807b940a4bc6] Running
	I0819 19:08:49.695935  452010 system_pods.go:89] "kindnet-72q7r" [d376a785-a08b-4d53-bc5e-02425901c947] Running
	I0819 19:08:49.695940  452010 system_pods.go:89] "kindnet-97cnn" [75284cfb-20b5-4675-b53e-db4130cc6722] Running
	I0819 19:08:49.695946  452010 system_pods.go:89] "kindnet-bpwjl" [624275c2-a670-4cc0-a11c-70f3e1b78946] Running
	I0819 19:08:49.695951  452010 system_pods.go:89] "kube-apiserver-ha-163902" [ab456c18-e3cf-462b-8028-55ddf9b36306] Running
	I0819 19:08:49.695957  452010 system_pods.go:89] "kube-apiserver-ha-163902-m02" [f11ec84f-77aa-4003-a786-f6c54b4c14bd] Running
	I0819 19:08:49.695962  452010 system_pods.go:89] "kube-apiserver-ha-163902-m03" [977eaba2-9cd2-42e2-83a4-f973bdebbf2b] Running
	I0819 19:08:49.695968  452010 system_pods.go:89] "kube-controller-manager-ha-163902" [efeafbc7-7719-4632-9215-fd3c3ca09cc5] Running
	I0819 19:08:49.695976  452010 system_pods.go:89] "kube-controller-manager-ha-163902-m02" [2d80759a-8137-4e3c-a621-ba92532c9d9b] Running
	I0819 19:08:49.695982  452010 system_pods.go:89] "kube-controller-manager-ha-163902-m03" [470c09f7-df81-4a14-9cbf-71b73a570c48] Running
	I0819 19:08:49.695988  452010 system_pods.go:89] "kube-proxy-4whvs" [b241f2e5-d83c-432b-bdce-c26940efa096] Running
	I0819 19:08:49.695993  452010 system_pods.go:89] "kube-proxy-wxrsv" [3d78c5e8-eed2-4da5-9425-76f96e2d8ed6] Running
	I0819 19:08:49.695999  452010 system_pods.go:89] "kube-proxy-xq852" [f9dee0f1-ada2-4cb4-8734-c2a3456c6d37] Running
	I0819 19:08:49.696004  452010 system_pods.go:89] "kube-scheduler-ha-163902" [ee9d8a96-56d9-4bc2-9829-bdf4a0ec0749] Running
	I0819 19:08:49.696012  452010 system_pods.go:89] "kube-scheduler-ha-163902-m02" [90a559e2-5c2a-4db8-b9d4-21362177d686] Running
	I0819 19:08:49.696018  452010 system_pods.go:89] "kube-scheduler-ha-163902-m03" [dc50d60c-4da1-4279-bd7a-bf1d9486d7ad] Running
	I0819 19:08:49.696026  452010 system_pods.go:89] "kube-vip-ha-163902" [3143571f-0eb1-4f9a-ada5-27fda4724d18] Running
	I0819 19:08:49.696033  452010 system_pods.go:89] "kube-vip-ha-163902-m02" [ccd47f0d-1f18-4146-808f-5436d25c859f] Running
	I0819 19:08:49.696037  452010 system_pods.go:89] "kube-vip-ha-163902-m03" [6f2b8b81-6d0d-4baa-9818-890c639a811c] Running
	I0819 19:08:49.696042  452010 system_pods.go:89] "storage-provisioner" [05dffa5a-3372-4a79-94ad-33d14a4b7fd0] Running
	I0819 19:08:49.696056  452010 system_pods.go:126] duration metric: took 209.3598ms to wait for k8s-apps to be running ...
	I0819 19:08:49.696068  452010 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:08:49.696130  452010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:08:49.712004  452010 system_svc.go:56] duration metric: took 15.928391ms WaitForService to wait for kubelet
	I0819 19:08:49.712039  452010 kubeadm.go:582] duration metric: took 23.654153488s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:08:49.712066  452010 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:08:49.882471  452010 request.go:632] Waited for 170.308096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes
	I0819 19:08:49.882533  452010 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes
	I0819 19:08:49.882538  452010 round_trippers.go:469] Request Headers:
	I0819 19:08:49.882546  452010 round_trippers.go:473]     Accept: application/json, */*
	I0819 19:08:49.882551  452010 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 19:08:49.886320  452010 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 19:08:49.887270  452010 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:08:49.887291  452010 node_conditions.go:123] node cpu capacity is 2
	I0819 19:08:49.887316  452010 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:08:49.887321  452010 node_conditions.go:123] node cpu capacity is 2
	I0819 19:08:49.887326  452010 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:08:49.887330  452010 node_conditions.go:123] node cpu capacity is 2
	I0819 19:08:49.887337  452010 node_conditions.go:105] duration metric: took 175.264878ms to run NodePressure ...
	I0819 19:08:49.887355  452010 start.go:241] waiting for startup goroutines ...
	I0819 19:08:49.887386  452010 start.go:255] writing updated cluster config ...
	I0819 19:08:49.887710  452010 ssh_runner.go:195] Run: rm -f paused
	I0819 19:08:49.942523  452010 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:08:49.944601  452010 out.go:177] * Done! kubectl is now configured to use "ha-163902" cluster and "default" namespace by default
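	
The trace above ends with minikube's readiness loop: for each system pod it GETs the pod object in the kube-system namespace, then the node it is scheduled on, and repeats until the pod's Ready condition is True; the "Waited for ... due to client-side throttling" entries come from client-go's client-side rate limiter and are expected at this request rate. A rough manual equivalent of that wait, sketched here with the pod names from this run and assuming the kubectl context created for this profile is named ha-163902, is:

	kubectl --context ha-163902 -n kube-system wait --for=condition=Ready --timeout=6m0s \
	  pod/etcd-ha-163902 pod/kube-apiserver-ha-163902 \
	  pod/kube-controller-manager-ha-163902 pod/kube-scheduler-ha-163902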
	
	
	==> CRI-O <==
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.373664888Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094808373643834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d16a7fa-4047-4daf-b821-da924457d424 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.374193091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8456b368-9a23-488e-832d-6d251d0a451c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.374246514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8456b368-9a23-488e-832d-6d251d0a451c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.374478287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724094532962212349,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259a75894a0e7bb2cdc9cd2b3363c853666a4c083456ff5d18e290b19f68e62d,PodSandboxId:bdf5c98989b4e84de8bdcad2859c43a2b3b7ceae07202ccadbf4b38936e48ad9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094393408241465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393388137670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393363252432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-99
2a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724094381626725267,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409437
8012677148,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f34db6fe664bceaf0e9d708d1611192c9077289166c5e6eb41e36c67d759f40,PodSandboxId:4fda63b31ef3bc66be8d33599e8028219f87aa47cd5edfb426a55be8aa6ac82c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409436992
7290186,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3706fbd25af952ee911ba6f754e5bd73,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094367193456047,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a9dbc3e9af7e2076b8b04c548b2a1cb5c385838357a95e7bc2a86709324ae5,PodSandboxId:d699d79418f1a46c4a408f0cc05417b89be740947f3265fba2de1f95ca70c025,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094367148223106,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fca5e9aea9309a38211d91fdbe50e67041a24d8dc547a7cff8edefeb4c57ae6,PodSandboxId:0b872309e95e526a6a3816ecc3576206a83295af6490713eb8431995e4de4605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094367120557759,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094367104625636,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8456b368-9a23-488e-832d-6d251d0a451c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.413967497Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ebac3220-3625-45fb-8493-fa7b45edcde6 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.414045632Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ebac3220-3625-45fb-8493-fa7b45edcde6 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.415200760Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09e5f875-e2e7-4ce9-a217-8317de3a1b5e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.416905440Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094808416871668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09e5f875-e2e7-4ce9-a217-8317de3a1b5e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.417628443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5d2effc-ddfd-4e13-ae5f-b43ddf1867a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.417683166Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5d2effc-ddfd-4e13-ae5f-b43ddf1867a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.417899333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724094532962212349,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259a75894a0e7bb2cdc9cd2b3363c853666a4c083456ff5d18e290b19f68e62d,PodSandboxId:bdf5c98989b4e84de8bdcad2859c43a2b3b7ceae07202ccadbf4b38936e48ad9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094393408241465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393388137670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393363252432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-99
2a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724094381626725267,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409437
8012677148,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f34db6fe664bceaf0e9d708d1611192c9077289166c5e6eb41e36c67d759f40,PodSandboxId:4fda63b31ef3bc66be8d33599e8028219f87aa47cd5edfb426a55be8aa6ac82c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409436992
7290186,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3706fbd25af952ee911ba6f754e5bd73,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094367193456047,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a9dbc3e9af7e2076b8b04c548b2a1cb5c385838357a95e7bc2a86709324ae5,PodSandboxId:d699d79418f1a46c4a408f0cc05417b89be740947f3265fba2de1f95ca70c025,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094367148223106,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fca5e9aea9309a38211d91fdbe50e67041a24d8dc547a7cff8edefeb4c57ae6,PodSandboxId:0b872309e95e526a6a3816ecc3576206a83295af6490713eb8431995e4de4605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094367120557759,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094367104625636,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5d2effc-ddfd-4e13-ae5f-b43ddf1867a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.463962602Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f682ec0-5d8b-4cc4-9329-b4d437e7903e name=/runtime.v1.RuntimeService/Version
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.464039433Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f682ec0-5d8b-4cc4-9329-b4d437e7903e name=/runtime.v1.RuntimeService/Version
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.465240041Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b98ed623-6136-4e9c-ab6b-f70b53255b9f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.465710295Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094808465684648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b98ed623-6136-4e9c-ab6b-f70b53255b9f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.466138749Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d54d8e32-f757-433c-a295-6bfb85bb993e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.466234934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d54d8e32-f757-433c-a295-6bfb85bb993e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.466641013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724094532962212349,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259a75894a0e7bb2cdc9cd2b3363c853666a4c083456ff5d18e290b19f68e62d,PodSandboxId:bdf5c98989b4e84de8bdcad2859c43a2b3b7ceae07202ccadbf4b38936e48ad9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094393408241465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393388137670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393363252432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-99
2a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724094381626725267,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409437
8012677148,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f34db6fe664bceaf0e9d708d1611192c9077289166c5e6eb41e36c67d759f40,PodSandboxId:4fda63b31ef3bc66be8d33599e8028219f87aa47cd5edfb426a55be8aa6ac82c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409436992
7290186,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3706fbd25af952ee911ba6f754e5bd73,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094367193456047,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a9dbc3e9af7e2076b8b04c548b2a1cb5c385838357a95e7bc2a86709324ae5,PodSandboxId:d699d79418f1a46c4a408f0cc05417b89be740947f3265fba2de1f95ca70c025,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094367148223106,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fca5e9aea9309a38211d91fdbe50e67041a24d8dc547a7cff8edefeb4c57ae6,PodSandboxId:0b872309e95e526a6a3816ecc3576206a83295af6490713eb8431995e4de4605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094367120557759,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094367104625636,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d54d8e32-f757-433c-a295-6bfb85bb993e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.507789061Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=52d0e365-d65f-43c1-80a5-34bcd22e21b2 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.507862341Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=52d0e365-d65f-43c1-80a5-34bcd22e21b2 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.509336224Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53e6afa1-19db-4025-97dd-45a63d991a1d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.509774666Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094808509754042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53e6afa1-19db-4025-97dd-45a63d991a1d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.510296009Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c638bcb-92d5-4d91-b3ff-0bb828d7454c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.510417343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c638bcb-92d5-4d91-b3ff-0bb828d7454c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:13:28 ha-163902 crio[680]: time="2024-08-19 19:13:28.510680463Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724094532962212349,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259a75894a0e7bb2cdc9cd2b3363c853666a4c083456ff5d18e290b19f68e62d,PodSandboxId:bdf5c98989b4e84de8bdcad2859c43a2b3b7ceae07202ccadbf4b38936e48ad9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094393408241465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393388137670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094393363252432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-99
2a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724094381626725267,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409437
8012677148,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f34db6fe664bceaf0e9d708d1611192c9077289166c5e6eb41e36c67d759f40,PodSandboxId:4fda63b31ef3bc66be8d33599e8028219f87aa47cd5edfb426a55be8aa6ac82c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409436992
7290186,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3706fbd25af952ee911ba6f754e5bd73,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094367193456047,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a9dbc3e9af7e2076b8b04c548b2a1cb5c385838357a95e7bc2a86709324ae5,PodSandboxId:d699d79418f1a46c4a408f0cc05417b89be740947f3265fba2de1f95ca70c025,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094367148223106,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fca5e9aea9309a38211d91fdbe50e67041a24d8dc547a7cff8edefeb4c57ae6,PodSandboxId:0b872309e95e526a6a3816ecc3576206a83295af6490713eb8431995e4de4605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094367120557759,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094367104625636,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c638bcb-92d5-4d91-b3ff-0bb828d7454c name=/runtime.v1.RuntimeService/ListContainers
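The Version, ImageFsInfo and ListContainers requests repeated above are the kubelet's routine CRI polling of CRI-O over its local socket; none of them indicates a failure. As a rough sketch (assuming crictl is present on the node, as it normally is in the minikube guest, and using the cri-socket path shown in the node annotations below), the same three RPCs can be replayed by hand:

  $ minikube -p ha-163902 ssh
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers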
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	02444059f768b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   eb7a960ca621f       busybox-7dff88458-vlrsr
	259a75894a0e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   bdf5c98989b4e       storage-provisioner
	920809b3fb8b5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   ccb6b229e5b0f       coredns-6f6b679f8f-nkths
	e3292ee2a24df       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   17befe587bdb8       coredns-6f6b679f8f-wmp8k
	2bde6d659e1cd       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   10a016c587c22       kindnet-bpwjl
	db4dd64341a0f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   5f1f616898161       kube-proxy-wxrsv
	4f34db6fe664b       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   4fda63b31ef3b       kube-vip-ha-163902
	4b31ffd467824       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   644e4a4ea97f1       kube-scheduler-ha-163902
	63a9dbc3e9af7       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   d699d79418f1a       kube-controller-manager-ha-163902
	8fca5e9aea930       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   0b872309e95e5       kube-apiserver-ha-163902
	d7785bd28970f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   8f73fd805b78d       etcd-ha-163902
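The table above maps each container ID to its pod and pod sandbox; every control-plane component on ha-163902 is Running with attempt 0. A hedged follow-up sketch, reusing IDs from the table (crictl accepts unambiguous ID prefixes), for pulling per-container detail and logs:

  $ minikube -p ha-163902 ssh -- sudo crictl inspect 8fca5e9aea930   # kube-apiserver: full CRI status, mounts, PID
  $ minikube -p ha-163902 ssh -- sudo crictl logs d7785bd28970f      # etcd container logs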
	
	
	==> coredns [920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5] <==
	[INFO] 10.244.2.2:39433 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000348752s
	[INFO] 10.244.2.2:47564 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018325s
	[INFO] 10.244.2.2:49967 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003381326s
	[INFO] 10.244.2.2:33626 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258809s
	[INFO] 10.244.1.2:51524 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001508794s
	[INFO] 10.244.1.2:44203 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105366s
	[INFO] 10.244.1.2:39145 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000196935s
	[INFO] 10.244.1.2:53804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000174817s
	[INFO] 10.244.0.4:38242 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152582s
	[INFO] 10.244.0.4:50866 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00178155s
	[INFO] 10.244.0.4:41459 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077648s
	[INFO] 10.244.0.4:52991 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001294022s
	[INFO] 10.244.0.4:49760 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077772s
	[INFO] 10.244.2.2:52036 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000184006s
	[INFO] 10.244.2.2:42639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139597s
	[INFO] 10.244.1.2:45707 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157857s
	[INFO] 10.244.1.2:55541 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079589s
	[INFO] 10.244.0.4:39107 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114365s
	[INFO] 10.244.0.4:42814 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075113s
	[INFO] 10.244.1.2:45907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164052s
	[INFO] 10.244.1.2:50977 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168617s
	[INFO] 10.244.1.2:55449 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000213337s
	[INFO] 10.244.1.2:36556 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110937s
	[INFO] 10.244.0.4:58486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000301321s
	[INFO] 10.244.0.4:59114 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075318s
	
	
	==> coredns [e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816] <==
	[INFO] 10.244.1.2:49834 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.00173877s
	[INFO] 10.244.1.2:53299 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001633058s
	[INFO] 10.244.0.4:37265 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248064s
	[INFO] 10.244.2.2:37997 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.018569099s
	[INFO] 10.244.2.2:39006 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148509s
	[INFO] 10.244.2.2:49793 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131124s
	[INFO] 10.244.1.2:35247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129697s
	[INFO] 10.244.1.2:51995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004964244s
	[INFO] 10.244.1.2:49029 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139842s
	[INFO] 10.244.1.2:37017 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012537s
	[INFO] 10.244.0.4:60699 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057628s
	[INFO] 10.244.0.4:57923 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000112473s
	[INFO] 10.244.0.4:51503 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082503s
	[INFO] 10.244.2.2:34426 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121035s
	[INFO] 10.244.2.2:59490 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095139s
	[INFO] 10.244.1.2:50323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124167s
	[INFO] 10.244.1.2:46467 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102348s
	[INFO] 10.244.0.4:41765 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001163s
	[INFO] 10.244.0.4:34540 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057148s
	[INFO] 10.244.2.2:54418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140774s
	[INFO] 10.244.2.2:59184 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158893s
	[INFO] 10.244.2.2:53883 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149814s
	[INFO] 10.244.2.2:35674 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136715s
	[INFO] 10.244.0.4:42875 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138512s
	[INFO] 10.244.0.4:58237 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102142s
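Each coredns line above records, among other fields, the client address, query id, query type and name, protocol, response code and latency; the NXDOMAIN answers for names like kubernetes.default.default.svc.cluster.local are ordinary search-path expansion rather than errors. A minimal way to reproduce the kind of lookups being logged (an assumption-laden sketch: it presumes the busybox pod listed earlier is still running and that the kubeconfig context is named after the profile):

  $ kubectl --context ha-163902 exec busybox-7dff88458-vlrsr -- nslookup kubernetes.default
  $ kubectl --context ha-163902 exec busybox-7dff88458-vlrsr -- nslookup host.minikube.internal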
	
	
	==> describe nodes <==
	Name:               ha-163902
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_06_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:06:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:13:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:09:17 +0000   Mon, 19 Aug 2024 19:06:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:09:17 +0000   Mon, 19 Aug 2024 19:06:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:09:17 +0000   Mon, 19 Aug 2024 19:06:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:09:17 +0000   Mon, 19 Aug 2024 19:06:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-163902
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d3b52f7c3a144ec8d3a6e98276775f3
	  System UUID:                4d3b52f7-c3a1-44ec-8d3a-6e98276775f3
	  Boot ID:                    26bff1c8-7a07-4ad4-9634-fcbc547b5a26
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vlrsr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 coredns-6f6b679f8f-nkths             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m11s
	  kube-system                 coredns-6f6b679f8f-wmp8k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m11s
	  kube-system                 etcd-ha-163902                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m17s
	  kube-system                 kindnet-bpwjl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m11s
	  kube-system                 kube-apiserver-ha-163902             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-controller-manager-ha-163902    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-proxy-wxrsv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-scheduler-ha-163902             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-vip-ha-163902                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m10s  kube-proxy       
	  Normal  Starting                 7m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m15s  kubelet          Node ha-163902 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m15s  kubelet          Node ha-163902 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m15s  kubelet          Node ha-163902 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m12s  node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	  Normal  NodeReady                6m56s  kubelet          Node ha-163902 status is now: NodeReady
	  Normal  RegisteredNode           6m10s  node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	  Normal  RegisteredNode           4m57s  node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	
	
	Name:               ha-163902-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T19_07_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:07:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:10:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 19:09:13 +0000   Mon, 19 Aug 2024 19:10:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 19:09:13 +0000   Mon, 19 Aug 2024 19:10:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 19:09:13 +0000   Mon, 19 Aug 2024 19:10:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 19:09:13 +0000   Mon, 19 Aug 2024 19:10:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    ha-163902-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4ebc4d6f40f47d9854129310dcf34d7
	  System UUID:                d4ebc4d6-f40f-47d9-8541-29310dcf34d7
	  Boot ID:                    716d5440-ffff-4957-be6f-50e03e7b2422
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9zj57                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 etcd-ha-163902-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m16s
	  kube-system                 kindnet-97cnn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m18s
	  kube-system                 kube-apiserver-ha-163902-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-controller-manager-ha-163902-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-4whvs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-scheduler-ha-163902-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-vip-ha-163902-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m18s (x8 over 6m18s)  kubelet          Node ha-163902-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s (x8 over 6m18s)  kubelet          Node ha-163902-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s (x7 over 6m18s)  kubelet          Node ha-163902-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m17s                  node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  RegisteredNode           4m57s                  node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  NodeNotReady             2m42s                  node-controller  Node ha-163902-m02 status is now: NodeNotReady
	
	
	Name:               ha-163902-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T19_08_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:08:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:13:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:09:23 +0000   Mon, 19 Aug 2024 19:08:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:09:23 +0000   Mon, 19 Aug 2024 19:08:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:09:23 +0000   Mon, 19 Aug 2024 19:08:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:09:23 +0000   Mon, 19 Aug 2024 19:08:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.59
	  Hostname:    ha-163902-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4a4497fbf7a43159de7a77620b40e05
	  System UUID:                c4a4497f-bf7a-4315-9de7-a77620b40e05
	  Boot ID:                    9c2df4b5-f5ca-406f-a65a-a8ee6263b172
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4hqxq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 etcd-ha-163902-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m4s
	  kube-system                 kindnet-72q7r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m6s
	  kube-system                 kube-apiserver-ha-163902-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-ha-163902-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-proxy-xq852                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-scheduler-ha-163902-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-vip-ha-163902-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m6s (x8 over 5m6s)  kubelet          Node ha-163902-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s (x8 over 5m6s)  kubelet          Node ha-163902-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s (x7 over 5m6s)  kubelet          Node ha-163902-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m5s                 node-controller  Node ha-163902-m03 event: Registered Node ha-163902-m03 in Controller
	  Normal  RegisteredNode           5m2s                 node-controller  Node ha-163902-m03 event: Registered Node ha-163902-m03 in Controller
	  Normal  RegisteredNode           4m57s                node-controller  Node ha-163902-m03 event: Registered Node ha-163902-m03 in Controller
	
	
	Name:               ha-163902-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T19_09_27_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:09:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:13:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:09:57 +0000   Mon, 19 Aug 2024 19:09:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:09:57 +0000   Mon, 19 Aug 2024 19:09:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:09:57 +0000   Mon, 19 Aug 2024 19:09:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:09:57 +0000   Mon, 19 Aug 2024 19:09:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.130
	  Hostname:    ha-163902-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d771c9152e0748dca0ecbcee5197aaea
	  System UUID:                d771c915-2e07-48dc-a0ec-bcee5197aaea
	  Boot ID:                    2128ff13-241c-4434-b6fe-09c16a15357c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-plbmk       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m2s
	  kube-system                 kube-proxy-9b77p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m57s                kube-proxy       
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal  NodeHasSufficientMemory  4m2s (x2 over 4m2s)  kubelet          Node ha-163902-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x2 over 4m2s)  kubelet          Node ha-163902-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x2 over 4m2s)  kubelet          Node ha-163902-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal  NodeReady                3m42s                kubelet          Node ha-163902-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug19 19:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047802] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.035962] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.765720] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.945917] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.569397] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.607321] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.061246] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063525] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.198071] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.117006] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.272735] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Aug19 19:06] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +3.660968] systemd-fstab-generator[894]: Ignoring "noauto" option for root device
	[  +0.062148] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.174652] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.082985] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.372328] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.696903] kauditd_printk_skb: 23 callbacks suppressed
	[Aug19 19:07] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732] <==
	{"level":"warn","ts":"2024-08-19T19:13:28.683794Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.770744Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.775528Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.781724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.786993Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.794884Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.801778Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.806963Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.810647Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.819190Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.824977Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.831347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.835948Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.837411Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.842340Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.849559Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.856301Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.863878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.869011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.872491Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.881729Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.881911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.888757Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.895521Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:13:28.904925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:13:28 up 7 min,  0 users,  load average: 0.07, 0.15, 0.09
	Linux ha-163902 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2] <==
	I0819 19:12:52.634997       1 main.go:299] handling current node
	I0819 19:13:02.633353       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:13:02.633384       1 main.go:299] handling current node
	I0819 19:13:02.633399       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:13:02.633403       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:13:02.633549       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:13:02.633570       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:13:02.633655       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:13:02.633676       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:13:12.629990       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:13:12.630097       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:13:12.630303       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:13:12.630334       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:13:12.630400       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:13:12.630418       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:13:12.630472       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:13:12.630490       1 main.go:299] handling current node
	I0819 19:13:22.624581       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:13:22.624633       1 main.go:299] handling current node
	I0819 19:13:22.624672       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:13:22.624679       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:13:22.624838       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:13:22.624856       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:13:22.624910       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:13:22.624925       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8fca5e9aea9309a38211d91fdbe50e67041a24d8dc547a7cff8edefeb4c57ae6] <==
	I0819 19:06:11.722524       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 19:06:11.729537       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.227]
	I0819 19:06:11.731100       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 19:06:11.740780       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 19:06:11.907361       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 19:06:13.440920       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 19:06:13.469083       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 19:06:13.484596       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 19:06:17.359540       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0819 19:06:17.507673       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0819 19:08:54.576415       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46604: use of closed network connection
	E0819 19:08:54.772351       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46608: use of closed network connection
	E0819 19:08:54.961822       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46630: use of closed network connection
	E0819 19:08:55.182737       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46642: use of closed network connection
	E0819 19:08:55.374304       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46666: use of closed network connection
	E0819 19:08:55.562243       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46684: use of closed network connection
	E0819 19:08:55.753509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46698: use of closed network connection
	E0819 19:08:55.937825       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46722: use of closed network connection
	E0819 19:08:56.128562       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46734: use of closed network connection
	E0819 19:08:56.430909       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46756: use of closed network connection
	E0819 19:08:56.599912       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46778: use of closed network connection
	E0819 19:08:56.783300       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46798: use of closed network connection
	E0819 19:08:56.967242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46818: use of closed network connection
	E0819 19:08:57.166319       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46834: use of closed network connection
	E0819 19:08:57.342330       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46850: use of closed network connection
	
	
	==> kube-controller-manager [63a9dbc3e9af7e2076b8b04c548b2a1cb5c385838357a95e7bc2a86709324ae5] <==
	I0819 19:09:26.888205       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-163902-m04" podCIDRs=["10.244.3.0/24"]
	I0819 19:09:26.888257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:26.888290       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:26.897777       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:26.955550       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-163902-m04"
	I0819 19:09:27.071653       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:27.261410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:27.495123       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:28.729442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:28.786826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:31.078719       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:31.104740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:37.267662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:46.368127       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:46.368769       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-163902-m04"
	I0819 19:09:46.380772       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:46.972463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:09:57.505219       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:10:46.108790       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-163902-m04"
	I0819 19:10:46.108888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m02"
	I0819 19:10:46.128703       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m02"
	I0819 19:10:46.204463       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.13391ms"
	I0819 19:10:46.205139       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.506µs"
	I0819 19:10:47.082631       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m02"
	I0819 19:10:51.319552       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m02"
	
	
	==> kube-proxy [db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:06:18.416775       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:06:18.428570       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0819 19:06:18.429535       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:06:18.540472       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:06:18.540525       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:06:18.540550       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:06:18.546347       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:06:18.546579       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:06:18.546589       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:06:18.547809       1 config.go:197] "Starting service config controller"
	I0819 19:06:18.547832       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:06:18.547851       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:06:18.547854       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:06:18.549843       1 config.go:326] "Starting node config controller"
	I0819 19:06:18.549853       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:06:18.648166       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 19:06:18.648223       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:06:18.650074       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca] <==
	W0819 19:06:10.791025       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 19:06:10.791133       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 19:06:10.813346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 19:06:10.813395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:10.852062       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 19:06:10.852116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:10.897777       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 19:06:10.897836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:10.947072       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 19:06:10.947211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:10.959849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 19:06:10.960702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:11.033567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 19:06:11.033616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:11.138095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 19:06:11.138245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:11.154082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 19:06:11.154232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:11.189865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 19:06:11.189919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:11.215289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 19:06:11.215345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0819 19:06:12.630110       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 19:08:50.829572       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9zj57\": pod busybox-7dff88458-9zj57 is already assigned to node \"ha-163902-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-9zj57" node="ha-163902-m03"
	E0819 19:08:50.829705       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9zj57\": pod busybox-7dff88458-9zj57 is already assigned to node \"ha-163902-m02\"" pod="default/busybox-7dff88458-9zj57"
	
	
	==> kubelet <==
	Aug 19 19:12:13 ha-163902 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:12:13 ha-163902 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:12:13 ha-163902 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:12:13 ha-163902 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:12:13 ha-163902 kubelet[1316]: E0819 19:12:13.529405    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094733528892278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:13 ha-163902 kubelet[1316]: E0819 19:12:13.529467    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094733528892278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:23 ha-163902 kubelet[1316]: E0819 19:12:23.531547    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094743531190273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:23 ha-163902 kubelet[1316]: E0819 19:12:23.531996    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094743531190273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:33 ha-163902 kubelet[1316]: E0819 19:12:33.534130    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094753533759371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:33 ha-163902 kubelet[1316]: E0819 19:12:33.534196    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094753533759371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:43 ha-163902 kubelet[1316]: E0819 19:12:43.536101    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094763535708252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:43 ha-163902 kubelet[1316]: E0819 19:12:43.536173    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094763535708252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:53 ha-163902 kubelet[1316]: E0819 19:12:53.538850    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094773538412133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:12:53 ha-163902 kubelet[1316]: E0819 19:12:53.538890    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094773538412133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:13:03 ha-163902 kubelet[1316]: E0819 19:13:03.540200    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094783539852358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:13:03 ha-163902 kubelet[1316]: E0819 19:13:03.540460    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094783539852358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:13:13 ha-163902 kubelet[1316]: E0819 19:13:13.378128    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:13:13 ha-163902 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:13:13 ha-163902 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:13:13 ha-163902 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:13:13 ha-163902 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:13:13 ha-163902 kubelet[1316]: E0819 19:13:13.543421    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094793542814888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:13:13 ha-163902 kubelet[1316]: E0819 19:13:13.543494    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094793542814888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:13:23 ha-163902 kubelet[1316]: E0819 19:13:23.545902    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094803545541735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:13:23 ha-163902 kubelet[1316]: E0819 19:13:23.546227    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094803545541735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-163902 -n ha-163902
helpers_test.go:261: (dbg) Run:  kubectl --context ha-163902 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (57.74s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (407.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-163902 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-163902 -v=7 --alsologtostderr
E0819 19:14:38.961007  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:14:56.397849  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:15:06.664226  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-163902 -v=7 --alsologtostderr: exit status 82 (2m1.873772602s)

                                                
                                                
-- stdout --
	* Stopping node "ha-163902-m04"  ...
	* Stopping node "ha-163902-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:13:30.423288  457837 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:13:30.423618  457837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:13:30.423635  457837 out.go:358] Setting ErrFile to fd 2...
	I0819 19:13:30.423642  457837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:13:30.423871  457837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:13:30.424125  457837 out.go:352] Setting JSON to false
	I0819 19:13:30.424212  457837 mustload.go:65] Loading cluster: ha-163902
	I0819 19:13:30.424680  457837 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:30.424788  457837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:13:30.424977  457837 mustload.go:65] Loading cluster: ha-163902
	I0819 19:13:30.425107  457837 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:30.425168  457837 stop.go:39] StopHost: ha-163902-m04
	I0819 19:13:30.425556  457837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:30.425600  457837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:30.442015  457837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41363
	I0819 19:13:30.442522  457837 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:30.443221  457837 main.go:141] libmachine: Using API Version  1
	I0819 19:13:30.443250  457837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:30.443599  457837 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:30.446210  457837 out.go:177] * Stopping node "ha-163902-m04"  ...
	I0819 19:13:30.447657  457837 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 19:13:30.447689  457837 main.go:141] libmachine: (ha-163902-m04) Calling .DriverName
	I0819 19:13:30.447957  457837 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 19:13:30.447988  457837 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHHostname
	I0819 19:13:30.451030  457837 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:30.451528  457837 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:09:12 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:13:30.451568  457837 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:13:30.451825  457837 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHPort
	I0819 19:13:30.452056  457837 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHKeyPath
	I0819 19:13:30.452262  457837 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHUsername
	I0819 19:13:30.452421  457837 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m04/id_rsa Username:docker}
	I0819 19:13:30.539755  457837 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 19:13:30.592706  457837 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 19:13:30.646057  457837 main.go:141] libmachine: Stopping "ha-163902-m04"...
	I0819 19:13:30.646091  457837 main.go:141] libmachine: (ha-163902-m04) Calling .GetState
	I0819 19:13:30.647766  457837 main.go:141] libmachine: (ha-163902-m04) Calling .Stop
	I0819 19:13:30.651800  457837 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 0/120
	I0819 19:13:31.814705  457837 main.go:141] libmachine: (ha-163902-m04) Calling .GetState
	I0819 19:13:31.815980  457837 main.go:141] libmachine: Machine "ha-163902-m04" was stopped.
	I0819 19:13:31.816001  457837 stop.go:75] duration metric: took 1.368346962s to stop
	I0819 19:13:31.816045  457837 stop.go:39] StopHost: ha-163902-m03
	I0819 19:13:31.816449  457837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:31.816497  457837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:31.832394  457837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42927
	I0819 19:13:31.832895  457837 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:31.833455  457837 main.go:141] libmachine: Using API Version  1
	I0819 19:13:31.833480  457837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:31.833836  457837 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:31.836908  457837 out.go:177] * Stopping node "ha-163902-m03"  ...
	I0819 19:13:31.838172  457837 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 19:13:31.838216  457837 main.go:141] libmachine: (ha-163902-m03) Calling .DriverName
	I0819 19:13:31.838573  457837 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 19:13:31.838602  457837 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHHostname
	I0819 19:13:31.841930  457837 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:31.842393  457837 main.go:141] libmachine: (ha-163902-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:28", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:07:50 +0000 UTC Type:0 Mac:52:54:00:64:e1:28 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-163902-m03 Clientid:01:52:54:00:64:e1:28}
	I0819 19:13:31.842421  457837 main.go:141] libmachine: (ha-163902-m03) DBG | domain ha-163902-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:64:e1:28 in network mk-ha-163902
	I0819 19:13:31.842597  457837 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHPort
	I0819 19:13:31.842815  457837 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHKeyPath
	I0819 19:13:31.843004  457837 main.go:141] libmachine: (ha-163902-m03) Calling .GetSSHUsername
	I0819 19:13:31.843159  457837 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m03/id_rsa Username:docker}
	I0819 19:13:31.924375  457837 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 19:13:31.978122  457837 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 19:13:32.032088  457837 main.go:141] libmachine: Stopping "ha-163902-m03"...
	I0819 19:13:32.032121  457837 main.go:141] libmachine: (ha-163902-m03) Calling .GetState
	I0819 19:13:32.033852  457837 main.go:141] libmachine: (ha-163902-m03) Calling .Stop
	I0819 19:13:32.038038  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 0/120
	I0819 19:13:33.039572  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 1/120
	I0819 19:13:34.040928  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 2/120
	I0819 19:13:35.042467  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 3/120
	I0819 19:13:36.044219  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 4/120
	I0819 19:13:37.046625  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 5/120
	I0819 19:13:38.048530  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 6/120
	I0819 19:13:39.050003  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 7/120
	I0819 19:13:40.052113  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 8/120
	I0819 19:13:41.053653  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 9/120
	I0819 19:13:42.055813  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 10/120
	I0819 19:13:43.057564  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 11/120
	I0819 19:13:44.059081  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 12/120
	I0819 19:13:45.060568  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 13/120
	I0819 19:13:46.062133  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 14/120
	I0819 19:13:47.064148  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 15/120
	I0819 19:13:48.065879  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 16/120
	I0819 19:13:49.067588  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 17/120
	I0819 19:13:50.069182  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 18/120
	I0819 19:13:51.070796  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 19/120
	I0819 19:13:52.072892  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 20/120
	I0819 19:13:53.074540  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 21/120
	I0819 19:13:54.076174  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 22/120
	I0819 19:13:55.077814  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 23/120
	I0819 19:13:56.079285  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 24/120
	I0819 19:13:57.081394  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 25/120
	I0819 19:13:58.083999  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 26/120
	I0819 19:13:59.085461  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 27/120
	I0819 19:14:00.086976  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 28/120
	I0819 19:14:01.088454  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 29/120
	I0819 19:14:02.090429  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 30/120
	I0819 19:14:03.091782  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 31/120
	I0819 19:14:04.093707  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 32/120
	I0819 19:14:05.095262  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 33/120
	I0819 19:14:06.096939  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 34/120
	I0819 19:14:07.098886  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 35/120
	I0819 19:14:08.100364  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 36/120
	I0819 19:14:09.101770  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 37/120
	I0819 19:14:10.103188  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 38/120
	I0819 19:14:11.104709  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 39/120
	I0819 19:14:12.106516  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 40/120
	I0819 19:14:13.107986  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 41/120
	I0819 19:14:14.109810  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 42/120
	I0819 19:14:15.111854  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 43/120
	I0819 19:14:16.113214  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 44/120
	I0819 19:14:17.114694  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 45/120
	I0819 19:14:18.116036  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 46/120
	I0819 19:14:19.117537  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 47/120
	I0819 19:14:20.119857  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 48/120
	I0819 19:14:21.121270  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 49/120
	I0819 19:14:22.123360  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 50/120
	I0819 19:14:23.124891  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 51/120
	I0819 19:14:24.126625  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 52/120
	I0819 19:14:25.128104  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 53/120
	I0819 19:14:26.129714  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 54/120
	I0819 19:14:27.131739  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 55/120
	I0819 19:14:28.133400  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 56/120
	I0819 19:14:29.135835  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 57/120
	I0819 19:14:30.137536  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 58/120
	I0819 19:14:31.139715  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 59/120
	I0819 19:14:32.141867  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 60/120
	I0819 19:14:33.143429  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 61/120
	I0819 19:14:34.144767  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 62/120
	I0819 19:14:35.146375  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 63/120
	I0819 19:14:36.147703  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 64/120
	I0819 19:14:37.149327  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 65/120
	I0819 19:14:38.150793  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 66/120
	I0819 19:14:39.152581  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 67/120
	I0819 19:14:40.154132  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 68/120
	I0819 19:14:41.155458  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 69/120
	I0819 19:14:42.157600  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 70/120
	I0819 19:14:43.159025  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 71/120
	I0819 19:14:44.160735  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 72/120
	I0819 19:14:45.162323  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 73/120
	I0819 19:14:46.163804  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 74/120
	I0819 19:14:47.166039  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 75/120
	I0819 19:14:48.167782  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 76/120
	I0819 19:14:49.169468  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 77/120
	I0819 19:14:50.171267  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 78/120
	I0819 19:14:51.172578  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 79/120
	I0819 19:14:52.174519  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 80/120
	I0819 19:14:53.176087  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 81/120
	I0819 19:14:54.177514  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 82/120
	I0819 19:14:55.178947  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 83/120
	I0819 19:14:56.180370  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 84/120
	I0819 19:14:57.182158  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 85/120
	I0819 19:14:58.183789  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 86/120
	I0819 19:14:59.185223  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 87/120
	I0819 19:15:00.186793  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 88/120
	I0819 19:15:01.188145  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 89/120
	I0819 19:15:02.190394  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 90/120
	I0819 19:15:03.192055  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 91/120
	I0819 19:15:04.193711  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 92/120
	I0819 19:15:05.195041  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 93/120
	I0819 19:15:06.196306  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 94/120
	I0819 19:15:07.198390  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 95/120
	I0819 19:15:08.199730  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 96/120
	I0819 19:15:09.201231  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 97/120
	I0819 19:15:10.202709  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 98/120
	I0819 19:15:11.204069  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 99/120
	I0819 19:15:12.206286  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 100/120
	I0819 19:15:13.208088  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 101/120
	I0819 19:15:14.210426  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 102/120
	I0819 19:15:15.212044  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 103/120
	I0819 19:15:16.213587  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 104/120
	I0819 19:15:17.215567  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 105/120
	I0819 19:15:18.217455  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 106/120
	I0819 19:15:19.219067  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 107/120
	I0819 19:15:20.220812  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 108/120
	I0819 19:15:21.222591  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 109/120
	I0819 19:15:22.224791  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 110/120
	I0819 19:15:23.226454  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 111/120
	I0819 19:15:24.228036  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 112/120
	I0819 19:15:25.229681  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 113/120
	I0819 19:15:26.232044  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 114/120
	I0819 19:15:27.234150  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 115/120
	I0819 19:15:28.235572  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 116/120
	I0819 19:15:29.237424  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 117/120
	I0819 19:15:30.239228  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 118/120
	I0819 19:15:31.241153  457837 main.go:141] libmachine: (ha-163902-m03) Waiting for machine to stop 119/120
	I0819 19:15:32.242483  457837 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 19:15:32.242571  457837 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 19:15:32.244598  457837 out.go:201] 
	W0819 19:15:32.245977  457837 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 19:15:32.246001  457837 out.go:270] * 
	W0819 19:15:32.249076  457837 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 19:15:32.250630  457837 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-163902 -v=7 --alsologtostderr" : exit status 82
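The stderr log above shows the shape of the failure: after backing up /etc/cni and /etc/kubernetes with rsync, the kvm2 driver issues a Stop and then polls the domain state once per second ("Waiting for machine to stop 0/120" through "119/120"). ha-163902-m04 stopped after the first poll, but ha-163902-m03 still reported "Running" when the 120-poll budget ran out, so the command exited with GUEST_STOP_TIMEOUT (exit status 82). Below is a minimal Go sketch of that bounded stop-and-poll pattern only; stopVM and vmState are hypothetical stand-ins, not minikube's or libmachine's actual API.

	// Illustrative sketch of a stop request followed by a bounded state poll,
	// mirroring the "Waiting for machine to stop N/120" lines in the log above.
	// stopVM and vmState are hypothetical callbacks, not the real kvm2 driver.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func stopWithTimeout(stopVM func() error, vmState func() (string, error),
		retries int, interval time.Duration) error {
		if err := stopVM(); err != nil {
			return err // the stop request itself failed
		}
		for i := 0; i < retries; i++ {
			state, err := vmState()
			if err != nil {
				return err
			}
			if state != "Running" {
				return nil // the guest shut down within the allowed window
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, retries)
			time.Sleep(interval)
		}
		// Matches the outcome in the log: the poll budget is exhausted while
		// the domain still reports "Running".
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Simulate a guest that ignores the stop request, as ha-163902-m03 did.
		err := stopWithTimeout(
			func() error { return nil },
			func() (string, error) { return "Running", nil },
			5, 10*time.Millisecond, // short demo values; the test polls 120 times at ~1s
		)
		fmt.Println("stop err:", err)
	}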
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-163902 --wait=true -v=7 --alsologtostderr
E0819 19:19:38.961710  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:19:56.397343  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-163902 --wait=true -v=7 --alsologtostderr: (4m42.824204124s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-163902
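For context on what this step is checking: RestartClusterKeepsNodes lists the cluster's nodes with "minikube node list -p ha-163902" before the stop and again after the restart, and the run above (exit status 82 on stop, followed by a 4m42s start --wait=true) is what feeds that comparison. A rough, hypothetical sketch of that before/after check, using only the CLI invocation visible in the log (the helper and the direct exec call are illustrative, not the test's real harness):

	// Hypothetical before/after node-list comparison; not ha_test.go's real code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// nodeList shells out to the same command the test log shows:
	// out/minikube-linux-amd64 node list -p <profile>
	func nodeList(profile string) (string, error) {
		out, err := exec.Command("out/minikube-linux-amd64",
			"node", "list", "-p", profile).CombinedOutput()
		return string(out), err
	}

	func main() {
		before, err := nodeList("ha-163902")
		if err != nil {
			fmt.Println("node list failed:", err)
			return
		}
		// ... stop and restart the cluster here (the step that timed out above) ...
		after, err := nodeList("ha-163902")
		if err != nil {
			fmt.Println("node list failed:", err)
			return
		}
		if before != after {
			fmt.Printf("restart changed the node list:\n%s\nvs\n%s\n", before, after)
		}
	}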
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-163902 -n ha-163902
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-163902 logs -n 25: (1.791122657s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-163902 cp ha-163902-m03:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m02:/home/docker/cp-test_ha-163902-m03_ha-163902-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m02 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m03_ha-163902-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m03:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04:/home/docker/cp-test_ha-163902-m03_ha-163902-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m04 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m03_ha-163902-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-163902 cp testdata/cp-test.txt                                                | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4137015802/001/cp-test_ha-163902-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902:/home/docker/cp-test_ha-163902-m04_ha-163902.txt                       |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902 sudo cat                                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m04_ha-163902.txt                                 |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m02:/home/docker/cp-test_ha-163902-m04_ha-163902-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m02 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m04_ha-163902-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03:/home/docker/cp-test_ha-163902-m04_ha-163902-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m03 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m04_ha-163902-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-163902 node stop m02 -v=7                                                     | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-163902 node start m02 -v=7                                                    | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-163902 -v=7                                                           | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-163902 -v=7                                                                | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-163902 --wait=true -v=7                                                    | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:15 UTC | 19 Aug 24 19:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-163902                                                                | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:20 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:15:32
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:15:32.304805  458310 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:15:32.305062  458310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:15:32.305072  458310 out.go:358] Setting ErrFile to fd 2...
	I0819 19:15:32.305076  458310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:15:32.305305  458310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:15:32.305908  458310 out.go:352] Setting JSON to false
	I0819 19:15:32.306988  458310 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10683,"bootTime":1724084249,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:15:32.307061  458310 start.go:139] virtualization: kvm guest
	I0819 19:15:32.309407  458310 out.go:177] * [ha-163902] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:15:32.310880  458310 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 19:15:32.310891  458310 notify.go:220] Checking for updates...
	I0819 19:15:32.313865  458310 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:15:32.315049  458310 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:15:32.316135  458310 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:15:32.317260  458310 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:15:32.318531  458310 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:15:32.320243  458310 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:15:32.320365  458310 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 19:15:32.320851  458310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:15:32.320904  458310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:15:32.336532  458310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I0819 19:15:32.337046  458310 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:15:32.337740  458310 main.go:141] libmachine: Using API Version  1
	I0819 19:15:32.337772  458310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:15:32.338221  458310 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:15:32.338460  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:15:32.378793  458310 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:15:32.380027  458310 start.go:297] selected driver: kvm2
	I0819 19:15:32.380058  458310 start.go:901] validating driver "kvm2" against &{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:15:32.380244  458310 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:15:32.380660  458310 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:15:32.380759  458310 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:15:32.398071  458310 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:15:32.398969  458310 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:15:32.399026  458310 cni.go:84] Creating CNI manager for ""
	I0819 19:15:32.399032  458310 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 19:15:32.399103  458310 start.go:340] cluster config:
	{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:15:32.399282  458310 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:15:32.402103  458310 out.go:177] * Starting "ha-163902" primary control-plane node in "ha-163902" cluster
	I0819 19:15:32.403305  458310 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:15:32.403368  458310 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:15:32.403387  458310 cache.go:56] Caching tarball of preloaded images
	I0819 19:15:32.403510  458310 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:15:32.403534  458310 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:15:32.403668  458310 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:15:32.403918  458310 start.go:360] acquireMachinesLock for ha-163902: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:15:32.403973  458310 start.go:364] duration metric: took 32.19µs to acquireMachinesLock for "ha-163902"
	I0819 19:15:32.403989  458310 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:15:32.403994  458310 fix.go:54] fixHost starting: 
	I0819 19:15:32.404246  458310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:15:32.404276  458310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:15:32.419758  458310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34149
	I0819 19:15:32.420308  458310 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:15:32.420895  458310 main.go:141] libmachine: Using API Version  1
	I0819 19:15:32.420925  458310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:15:32.421339  458310 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:15:32.421561  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:15:32.421753  458310 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:15:32.423555  458310 fix.go:112] recreateIfNeeded on ha-163902: state=Running err=<nil>
	W0819 19:15:32.423580  458310 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:15:32.425714  458310 out.go:177] * Updating the running kvm2 "ha-163902" VM ...
	I0819 19:15:32.427087  458310 machine.go:93] provisionDockerMachine start ...
	I0819 19:15:32.427117  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:15:32.427503  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:15:32.430552  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.431221  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:15:32.431255  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.431496  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:15:32.431746  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:32.431920  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:32.432042  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:15:32.432228  458310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:15:32.432430  458310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:15:32.432444  458310 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:15:32.537930  458310 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-163902
	
	I0819 19:15:32.537960  458310 main.go:141] libmachine: (ha-163902) Calling .GetMachineName
	I0819 19:15:32.538287  458310 buildroot.go:166] provisioning hostname "ha-163902"
	I0819 19:15:32.538314  458310 main.go:141] libmachine: (ha-163902) Calling .GetMachineName
	I0819 19:15:32.538547  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:15:32.541554  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.541994  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:15:32.542023  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.542257  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:15:32.542473  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:32.542651  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:32.542854  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:15:32.543057  458310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:15:32.543273  458310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:15:32.543289  458310 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-163902 && echo "ha-163902" | sudo tee /etc/hostname
	I0819 19:15:32.666907  458310 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-163902
	
	I0819 19:15:32.666955  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:15:32.669696  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.670010  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:15:32.670042  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.670206  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:15:32.670396  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:32.670613  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:32.670751  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:15:32.670948  458310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:15:32.671172  458310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:15:32.671190  458310 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-163902' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-163902/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-163902' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:15:32.778233  458310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:15:32.778269  458310 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:15:32.778324  458310 buildroot.go:174] setting up certificates
	I0819 19:15:32.778338  458310 provision.go:84] configureAuth start
	I0819 19:15:32.778353  458310 main.go:141] libmachine: (ha-163902) Calling .GetMachineName
	I0819 19:15:32.778705  458310 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:15:32.781189  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.781587  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:15:32.781620  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.781750  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:15:32.784099  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.784506  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:15:32.784536  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.784683  458310 provision.go:143] copyHostCerts
	I0819 19:15:32.784714  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:15:32.784755  458310 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:15:32.784779  458310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:15:32.784863  458310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:15:32.784964  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:15:32.784988  458310 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:15:32.784997  458310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:15:32.785035  458310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:15:32.785091  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:15:32.785115  458310 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:15:32.785124  458310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:15:32.785179  458310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:15:32.785245  458310 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.ha-163902 san=[127.0.0.1 192.168.39.227 ha-163902 localhost minikube]
	I0819 19:15:32.881608  458310 provision.go:177] copyRemoteCerts
	I0819 19:15:32.881678  458310 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:15:32.881705  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:15:32.884812  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.885289  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:15:32.885325  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.885589  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:15:32.885840  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:32.886075  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:15:32.886282  458310 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:15:32.969041  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 19:15:32.969182  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:15:33.003032  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 19:15:33.003129  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0819 19:15:33.032035  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 19:15:33.032111  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:15:33.058659  458310 provision.go:87] duration metric: took 280.307466ms to configureAuth
	I0819 19:15:33.058694  458310 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:15:33.058919  458310 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:15:33.058998  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:15:33.061861  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:33.062295  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:15:33.062318  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:33.062560  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:15:33.062813  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:33.062985  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:33.063134  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:15:33.063278  458310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:15:33.063471  458310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:15:33.063499  458310 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:17:04.000215  458310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:17:04.000269  458310 machine.go:96] duration metric: took 1m31.573162564s to provisionDockerMachine
	I0819 19:17:04.000287  458310 start.go:293] postStartSetup for "ha-163902" (driver="kvm2")
	I0819 19:17:04.000308  458310 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:17:04.000333  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:17:04.000730  458310 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:17:04.000762  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:17:04.003953  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.004486  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:17:04.004517  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.004701  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:17:04.004908  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:17:04.005045  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:17:04.005182  458310 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:17:04.088269  458310 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:17:04.092949  458310 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:17:04.092980  458310 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:17:04.093058  458310 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:17:04.093157  458310 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:17:04.093168  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 19:17:04.093265  458310 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:17:04.103192  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:17:04.127346  458310 start.go:296] duration metric: took 127.041833ms for postStartSetup
	I0819 19:17:04.127402  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:17:04.127817  458310 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 19:17:04.127848  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:17:04.130645  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.131135  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:17:04.131190  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.131414  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:17:04.131650  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:17:04.131821  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:17:04.131987  458310 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	W0819 19:17:04.211896  458310 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0819 19:17:04.211928  458310 fix.go:56] duration metric: took 1m31.807933704s for fixHost
	I0819 19:17:04.211952  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:17:04.214743  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.215117  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:17:04.215162  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.215379  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:17:04.215616  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:17:04.215798  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:17:04.215944  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:17:04.216114  458310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:17:04.216297  458310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:17:04.216307  458310 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:17:04.318130  458310 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724095024.265455939
	
	I0819 19:17:04.318164  458310 fix.go:216] guest clock: 1724095024.265455939
	I0819 19:17:04.318173  458310 fix.go:229] Guest: 2024-08-19 19:17:04.265455939 +0000 UTC Remote: 2024-08-19 19:17:04.211936554 +0000 UTC m=+91.949434439 (delta=53.519385ms)
	I0819 19:17:04.318195  458310 fix.go:200] guest clock delta is within tolerance: 53.519385ms
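	(For reference, the delta reported above is just the guest timestamp minus the host-observed timestamp for the same instant:

	    1724095024.265455939 s (guest) - 1724095024.211936554 s (remote) = 0.053519385 s = 53.519385 ms

	which matches the delta=53.519385ms figure at fix.go:229 and stays inside the skew tolerance checked at fix.go:200.)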
	I0819 19:17:04.318203  458310 start.go:83] releasing machines lock for "ha-163902", held for 1m31.91421829s
	I0819 19:17:04.318228  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:17:04.318506  458310 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:17:04.321421  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.321855  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:17:04.321882  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.322061  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:17:04.322666  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:17:04.322856  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:17:04.322954  458310 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:17:04.323028  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:17:04.323062  458310 ssh_runner.go:195] Run: cat /version.json
	I0819 19:17:04.323082  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:17:04.325353  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.325633  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:17:04.325659  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.325682  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.325865  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:17:04.326065  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:17:04.326178  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:17:04.326186  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:17:04.326202  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.326325  458310 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:17:04.326380  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:17:04.326532  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:17:04.326689  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:17:04.326837  458310 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:17:04.428691  458310 ssh_runner.go:195] Run: systemctl --version
	I0819 19:17:04.434889  458310 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:17:04.590668  458310 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:17:04.596793  458310 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:17:04.596883  458310 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:17:04.606251  458310 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 19:17:04.606280  458310 start.go:495] detecting cgroup driver to use...
	I0819 19:17:04.606357  458310 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:17:04.626815  458310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:17:04.646511  458310 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:17:04.646583  458310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:17:04.667636  458310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:17:04.682111  458310 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:17:04.847240  458310 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:17:04.993719  458310 docker.go:233] disabling docker service ...
	I0819 19:17:04.993810  458310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:17:05.011211  458310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:17:05.026695  458310 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:17:05.179580  458310 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:17:05.325039  458310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:17:05.339477  458310 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:17:05.358706  458310 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:17:05.358784  458310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:17:05.369817  458310 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:17:05.369897  458310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:17:05.381082  458310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:17:05.392642  458310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:17:05.403832  458310 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:17:05.415556  458310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:17:05.427195  458310 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:17:05.438176  458310 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
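	(Read together, the sed edits above boil down to the following key settings in /etc/crio/crio.conf.d/02-crio.conf -- a sketch of only the keys these commands touch; everything else comes from the defaults shipped in the ISO:

	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	)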
	I0819 19:17:05.449088  458310 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:17:05.459615  458310 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:17:05.470683  458310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:17:05.619095  458310 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:17:13.762513  458310 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.143368236s)
	I0819 19:17:13.762545  458310 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:17:13.762608  458310 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:17:13.767753  458310 start.go:563] Will wait 60s for crictl version
	I0819 19:17:13.767833  458310 ssh_runner.go:195] Run: which crictl
	I0819 19:17:13.771516  458310 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:17:13.803961  458310 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:17:13.804049  458310 ssh_runner.go:195] Run: crio --version
	I0819 19:17:13.832688  458310 ssh_runner.go:195] Run: crio --version
	I0819 19:17:13.862955  458310 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:17:13.864365  458310 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:17:13.866970  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:13.867374  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:17:13.867405  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:13.867628  458310 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:17:13.872654  458310 kubeadm.go:883] updating cluster {Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:17:13.872877  458310 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:17:13.872942  458310 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:17:13.925087  458310 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:17:13.925112  458310 crio.go:433] Images already preloaded, skipping extraction
	I0819 19:17:13.925176  458310 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:17:13.958501  458310 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:17:13.958537  458310 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:17:13.958547  458310 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0 crio true true} ...
	I0819 19:17:13.958644  458310 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-163902 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:17:13.958712  458310 ssh_runner.go:195] Run: crio config
	I0819 19:17:14.020413  458310 cni.go:84] Creating CNI manager for ""
	I0819 19:17:14.020437  458310 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 19:17:14.020449  458310 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:17:14.020477  458310 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-163902 NodeName:ha-163902 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:17:14.020632  458310 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-163902"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:17:14.020654  458310 kube-vip.go:115] generating kube-vip config ...
	I0819 19:17:14.020710  458310 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 19:17:14.032530  458310 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 19:17:14.032634  458310 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0819 19:17:14.032691  458310 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:17:14.043070  458310 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:17:14.043151  458310 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 19:17:14.053711  458310 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 19:17:14.071050  458310 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:17:14.088047  458310 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 19:17:14.105285  458310 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 19:17:14.124259  458310 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 19:17:14.128565  458310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:17:14.273759  458310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:17:14.288848  458310 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902 for IP: 192.168.39.227
	I0819 19:17:14.288880  458310 certs.go:194] generating shared ca certs ...
	I0819 19:17:14.288907  458310 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.289086  458310 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:17:14.289154  458310 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:17:14.289170  458310 certs.go:256] generating profile certs ...
	I0819 19:17:14.289257  458310 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key
	I0819 19:17:14.289292  458310 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.cada45d2
	I0819 19:17:14.289315  458310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.cada45d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.162 192.168.39.59 192.168.39.254]
	I0819 19:17:14.470970  458310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.cada45d2 ...
	I0819 19:17:14.471004  458310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.cada45d2: {Name:mk97b8324aec57377fbcdea1ffa69849c0be6bfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.471173  458310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.cada45d2 ...
	I0819 19:17:14.471185  458310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.cada45d2: {Name:mk44f37a5a74a4ac4422be5fb78ed86c85ebcf19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.471253  458310 certs.go:381] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.cada45d2 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt
	I0819 19:17:14.471405  458310 certs.go:385] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.cada45d2 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key
	I0819 19:17:14.471526  458310 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key
	I0819 19:17:14.471545  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 19:17:14.471560  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 19:17:14.471572  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 19:17:14.471581  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 19:17:14.471593  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 19:17:14.471603  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 19:17:14.471615  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 19:17:14.471624  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 19:17:14.471671  458310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:17:14.471698  458310 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:17:14.471707  458310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:17:14.471729  458310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:17:14.471758  458310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:17:14.471782  458310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:17:14.471819  458310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:17:14.471844  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 19:17:14.471857  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:17:14.471868  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 19:17:14.472433  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:17:14.498015  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:17:14.522790  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:17:14.547630  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:17:14.573874  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 19:17:14.598716  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:17:14.623610  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:17:14.649218  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:17:14.675412  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:17:14.700408  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:17:14.725232  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:17:14.749879  458310 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:17:14.767067  458310 ssh_runner.go:195] Run: openssl version
	I0819 19:17:14.772898  458310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:17:14.784149  458310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:17:14.788746  458310 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:17:14.788819  458310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:17:14.794862  458310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:17:14.804931  458310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:17:14.816021  458310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:17:14.820802  458310 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:17:14.820865  458310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:17:14.826698  458310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:17:14.836637  458310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:17:14.847774  458310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:17:14.852627  458310 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:17:14.852698  458310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:17:14.858315  458310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
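	(The /etc/ssl/certs/<hash>.0 symlinks created above follow OpenSSL's hashed-directory convention: each link is named after the subject-name hash printed by the preceding `openssl x509 -hash -noout` call -- presumably b5213941 for minikubeCA.pem, 51391683 for 438159.pem and 3ec20f2e for 4381592.pem, given the link created right after each hash command -- so TLS clients that use /etc/ssl/certs as their CA path can look each certificate up by hash, e.g.:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	)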
	I0819 19:17:14.867927  458310 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:17:14.872905  458310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:17:14.878680  458310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:17:14.884545  458310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:17:14.890288  458310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:17:14.896616  458310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:17:14.902627  458310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
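	(The six `-checkend 86400` probes above ask openssl to exit non-zero if the certificate expires within the next 86400 seconds, i.e. 24 hours; since the run proceeds straight to StartCluster, the in-VM certificates are apparently all still valid for at least another day. A stand-alone equivalent, reusing one path from this run:

	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"
	)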
	I0819 19:17:14.908236  458310 kubeadm.go:392] StartCluster: {Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:17:14.908372  458310 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:17:14.908421  458310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:17:14.945183  458310 cri.go:89] found id: "4cc8ba41d8cdd81b9ab345470fb3e91b985359c81d50e13b5389284e1e6a3b8c"
	I0819 19:17:14.945206  458310 cri.go:89] found id: "87bc6b08ac735cbe640bfc9921c1ff87a6eca1047a9c4e40b3efcc4fa384a480"
	I0819 19:17:14.945209  458310 cri.go:89] found id: "5cfb4b337ff7bdc8c86acefbd6abfbfdc390e5b523892c04a98267b224398180"
	I0819 19:17:14.945213  458310 cri.go:89] found id: "259a75894a0e7bb2cdc9cd2b3363c853666a4c083456ff5d18e290b19f68e62d"
	I0819 19:17:14.945215  458310 cri.go:89] found id: "920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5"
	I0819 19:17:14.945218  458310 cri.go:89] found id: "e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816"
	I0819 19:17:14.945223  458310 cri.go:89] found id: "2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2"
	I0819 19:17:14.945225  458310 cri.go:89] found id: "db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd"
	I0819 19:17:14.945228  458310 cri.go:89] found id: "4f34db6fe664bceaf0e9d708d1611192c9077289166c5e6eb41e36c67d759f40"
	I0819 19:17:14.945234  458310 cri.go:89] found id: "4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca"
	I0819 19:17:14.945236  458310 cri.go:89] found id: "63a9dbc3e9af7e2076b8b04c548b2a1cb5c385838357a95e7bc2a86709324ae5"
	I0819 19:17:14.945239  458310 cri.go:89] found id: "8fca5e9aea9309a38211d91fdbe50e67041a24d8dc547a7cff8edefeb4c57ae6"
	I0819 19:17:14.945241  458310 cri.go:89] found id: "d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732"
	I0819 19:17:14.945244  458310 cri.go:89] found id: ""
	I0819 19:17:14.945292  458310 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.793555028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095215793528572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e45c7065-ebcf-46ea-be6e-5d09b5b9ea41 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.794246752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ea82f60-ef47-4a5b-82f0-bdd5521866ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.794322027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ea82f60-ef47-4a5b-82f0-bdd5521866ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.794793723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0bca30016c03bcea841c86b727ac0060d8f65be32595f91b8d45aa0580826225,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095123374479770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994f082e7404b797402b183967d58a813905219b02a9f4a28db44e4196488b94,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095116374514664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cd1f97eb38ac02d43f6ffcc43488e9af5b265e9f756b1d495ed3349c40726e7,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095085376542835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb2b38899974af1867564b1152197f3906ab54765c23bcd44911a7a5ca28be5,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724095080373407284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70c14a96546fb6ec5ddf99807b73aef57277826acf7fee28e5624d2558fbf26,PodSandboxId:721b9413072006da57e99aefe4690db2a48263d82b0060a81e596026a2178cf1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724095074759489851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9313e58051c6a0409400c754736819f17e02fce650219c44b2b173742cea39f1,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724095073830784091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca3a34d99a8c33c15214512a1c85be8b0abdf6c4fc44721cd0b7910eb4dd026,PodSandboxId:a37935e6223fb69ff8606b4a898ccde537b356d908309ec97d0b8e2bd5829bf1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724095055913094790,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5810f2181e3f187cb6813890f6c942,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e83d9b3a38071761ae1db6a7e164216729a64bc610a8238c5e743cc55f334dd,PodSandboxId:24644424fc1bde1f3bfbe432c3a5224b717420da7cf876655deb987c5833aab7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724095041364930383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:554de3a8cfd0306b14e037d2df2736c671fafe4053cda737537da92371c6a2d5,PodSandboxId:c160ca374efd41f181dec40cf7fd25bcc18dd0fa99b07f7e5ac5ecd3c375cf2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041417550271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b29ce7ae2e5b57b8b7dd3fb18e375f5fb5fa32dd1c57ae7361d52ac21e4317f0,PodSandboxId:68e9795b79bd4ad7374c90dfc587d0982769d6efd09c19cb96acd612e18a01ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041488423892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1062663de6014874208d7e918cf4e228d79e7643a88797435170333e7c0a68,PodSandboxId:713514c53fd20982de9e5e25da9d0234ffc6ae16655b57d8cfb57210f6129c1c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095041321130204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163
902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c41e6ee62e55f65d9881f0a06abf0a05e7082c63e2db15f2d042c6b777eab00,PodSandboxId:330b4df5bbcc5a559549d8d5ba3be8e3d1fd3d85138c81046458b39d43fc8bfa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724095041278040198,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2126230b10e8f0116efe5b0c00eb69b8334aced694d463f9bd0dfac35e3fb55a,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724095041171347214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3be8d692f7f209eaa8f4c8738aae27df1a1518dff4906b9e43314dcc8cf9114,PodSandboxId:0520ed277638a65149668786945baac3abdc0bf6ab5fbe431edc0e278ded02fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095041066385729,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724094532962277862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393388258222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393363347208,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724094381626774299,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724094378012686594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094367193567094,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094367106237335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ea82f60-ef47-4a5b-82f0-bdd5521866ce name=/runtime.v1.RuntimeService/ListContainers
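	(Editor's note, not part of the captured journal: the repeated debug entries above and below are CRI-O answering /runtime.v1.RuntimeService/ListContainers polls with an empty filter, which is why the full container list, RUNNING and EXITED alike, is dumped every few tens of milliseconds. A minimal, hypothetical Go sketch of issuing that same RPC against CRI-O's default socket is shown here for orientation; the socket path and client wiring are assumptions and are not taken from this report. The usual CLI equivalent is "crictl ps -a".)

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Assumption: CRI-O is listening on its default endpoint on this node.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// An empty filter mirrors the "No filters were applied" log entries:
		// the runtime returns every container, running or exited.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}
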
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.837878440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=871390e1-5229-4a57-b159-c8809275b081 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.837960353Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=871390e1-5229-4a57-b159-c8809275b081 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.839376994Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a85c227d-97f3-49dc-b37a-6f5fbf1c867a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.839868884Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095215839840915,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a85c227d-97f3-49dc-b37a-6f5fbf1c867a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.840444173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e50e6cdd-5101-4b7b-9ade-94b7f71ecfa9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.840522623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e50e6cdd-5101-4b7b-9ade-94b7f71ecfa9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.841102812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0bca30016c03bcea841c86b727ac0060d8f65be32595f91b8d45aa0580826225,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095123374479770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994f082e7404b797402b183967d58a813905219b02a9f4a28db44e4196488b94,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095116374514664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cd1f97eb38ac02d43f6ffcc43488e9af5b265e9f756b1d495ed3349c40726e7,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095085376542835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb2b38899974af1867564b1152197f3906ab54765c23bcd44911a7a5ca28be5,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724095080373407284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70c14a96546fb6ec5ddf99807b73aef57277826acf7fee28e5624d2558fbf26,PodSandboxId:721b9413072006da57e99aefe4690db2a48263d82b0060a81e596026a2178cf1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724095074759489851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9313e58051c6a0409400c754736819f17e02fce650219c44b2b173742cea39f1,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724095073830784091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca3a34d99a8c33c15214512a1c85be8b0abdf6c4fc44721cd0b7910eb4dd026,PodSandboxId:a37935e6223fb69ff8606b4a898ccde537b356d908309ec97d0b8e2bd5829bf1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724095055913094790,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5810f2181e3f187cb6813890f6c942,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e83d9b3a38071761ae1db6a7e164216729a64bc610a8238c5e743cc55f334dd,PodSandboxId:24644424fc1bde1f3bfbe432c3a5224b717420da7cf876655deb987c5833aab7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724095041364930383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:554de3a8cfd0306b14e037d2df2736c671fafe4053cda737537da92371c6a2d5,PodSandboxId:c160ca374efd41f181dec40cf7fd25bcc18dd0fa99b07f7e5ac5ecd3c375cf2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041417550271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b29ce7ae2e5b57b8b7dd3fb18e375f5fb5fa32dd1c57ae7361d52ac21e4317f0,PodSandboxId:68e9795b79bd4ad7374c90dfc587d0982769d6efd09c19cb96acd612e18a01ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041488423892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1062663de6014874208d7e918cf4e228d79e7643a88797435170333e7c0a68,PodSandboxId:713514c53fd20982de9e5e25da9d0234ffc6ae16655b57d8cfb57210f6129c1c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095041321130204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163
902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c41e6ee62e55f65d9881f0a06abf0a05e7082c63e2db15f2d042c6b777eab00,PodSandboxId:330b4df5bbcc5a559549d8d5ba3be8e3d1fd3d85138c81046458b39d43fc8bfa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724095041278040198,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2126230b10e8f0116efe5b0c00eb69b8334aced694d463f9bd0dfac35e3fb55a,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724095041171347214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3be8d692f7f209eaa8f4c8738aae27df1a1518dff4906b9e43314dcc8cf9114,PodSandboxId:0520ed277638a65149668786945baac3abdc0bf6ab5fbe431edc0e278ded02fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095041066385729,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724094532962277862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393388258222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393363347208,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724094381626774299,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724094378012686594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094367193567094,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094367106237335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e50e6cdd-5101-4b7b-9ade-94b7f71ecfa9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.888965907Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c35a0a2f-dea2-4feb-9bf4-cf0647f4fd3e name=/runtime.v1.RuntimeService/Version
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.889044056Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c35a0a2f-dea2-4feb-9bf4-cf0647f4fd3e name=/runtime.v1.RuntimeService/Version
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.890400307Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7d8cf28-e786-429e-b01b-e4f691c0cbf9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.890847677Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095215890823725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7d8cf28-e786-429e-b01b-e4f691c0cbf9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.891449676Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c406e47d-19d8-46d5-aaaa-b15801b3f791 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.891505678Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c406e47d-19d8-46d5-aaaa-b15801b3f791 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.891904188Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0bca30016c03bcea841c86b727ac0060d8f65be32595f91b8d45aa0580826225,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095123374479770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994f082e7404b797402b183967d58a813905219b02a9f4a28db44e4196488b94,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095116374514664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cd1f97eb38ac02d43f6ffcc43488e9af5b265e9f756b1d495ed3349c40726e7,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095085376542835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb2b38899974af1867564b1152197f3906ab54765c23bcd44911a7a5ca28be5,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724095080373407284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70c14a96546fb6ec5ddf99807b73aef57277826acf7fee28e5624d2558fbf26,PodSandboxId:721b9413072006da57e99aefe4690db2a48263d82b0060a81e596026a2178cf1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724095074759489851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9313e58051c6a0409400c754736819f17e02fce650219c44b2b173742cea39f1,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724095073830784091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca3a34d99a8c33c15214512a1c85be8b0abdf6c4fc44721cd0b7910eb4dd026,PodSandboxId:a37935e6223fb69ff8606b4a898ccde537b356d908309ec97d0b8e2bd5829bf1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724095055913094790,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5810f2181e3f187cb6813890f6c942,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e83d9b3a38071761ae1db6a7e164216729a64bc610a8238c5e743cc55f334dd,PodSandboxId:24644424fc1bde1f3bfbe432c3a5224b717420da7cf876655deb987c5833aab7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724095041364930383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:554de3a8cfd0306b14e037d2df2736c671fafe4053cda737537da92371c6a2d5,PodSandboxId:c160ca374efd41f181dec40cf7fd25bcc18dd0fa99b07f7e5ac5ecd3c375cf2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041417550271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b29ce7ae2e5b57b8b7dd3fb18e375f5fb5fa32dd1c57ae7361d52ac21e4317f0,PodSandboxId:68e9795b79bd4ad7374c90dfc587d0982769d6efd09c19cb96acd612e18a01ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041488423892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1062663de6014874208d7e918cf4e228d79e7643a88797435170333e7c0a68,PodSandboxId:713514c53fd20982de9e5e25da9d0234ffc6ae16655b57d8cfb57210f6129c1c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095041321130204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163
902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c41e6ee62e55f65d9881f0a06abf0a05e7082c63e2db15f2d042c6b777eab00,PodSandboxId:330b4df5bbcc5a559549d8d5ba3be8e3d1fd3d85138c81046458b39d43fc8bfa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724095041278040198,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2126230b10e8f0116efe5b0c00eb69b8334aced694d463f9bd0dfac35e3fb55a,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724095041171347214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3be8d692f7f209eaa8f4c8738aae27df1a1518dff4906b9e43314dcc8cf9114,PodSandboxId:0520ed277638a65149668786945baac3abdc0bf6ab5fbe431edc0e278ded02fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095041066385729,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724094532962277862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393388258222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393363347208,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724094381626774299,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724094378012686594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094367193567094,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094367106237335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c406e47d-19d8-46d5-aaaa-b15801b3f791 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.934300392Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=604d7b73-a6f4-405e-812c-f207cbabd0e6 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.934392303Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=604d7b73-a6f4-405e-812c-f207cbabd0e6 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.936063839Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8886c501-a3b0-44a3-ab35-6e5741d8828d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.937299908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095215937242488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8886c501-a3b0-44a3-ab35-6e5741d8828d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.938296800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65a5bcdf-ce71-4535-9514-370300382198 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.938355503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65a5bcdf-ce71-4535-9514-370300382198 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:15 ha-163902 crio[3608]: time="2024-08-19 19:20:15.938762563Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0bca30016c03bcea841c86b727ac0060d8f65be32595f91b8d45aa0580826225,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095123374479770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994f082e7404b797402b183967d58a813905219b02a9f4a28db44e4196488b94,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095116374514664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cd1f97eb38ac02d43f6ffcc43488e9af5b265e9f756b1d495ed3349c40726e7,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095085376542835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb2b38899974af1867564b1152197f3906ab54765c23bcd44911a7a5ca28be5,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724095080373407284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70c14a96546fb6ec5ddf99807b73aef57277826acf7fee28e5624d2558fbf26,PodSandboxId:721b9413072006da57e99aefe4690db2a48263d82b0060a81e596026a2178cf1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724095074759489851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9313e58051c6a0409400c754736819f17e02fce650219c44b2b173742cea39f1,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724095073830784091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca3a34d99a8c33c15214512a1c85be8b0abdf6c4fc44721cd0b7910eb4dd026,PodSandboxId:a37935e6223fb69ff8606b4a898ccde537b356d908309ec97d0b8e2bd5829bf1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724095055913094790,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5810f2181e3f187cb6813890f6c942,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e83d9b3a38071761ae1db6a7e164216729a64bc610a8238c5e743cc55f334dd,PodSandboxId:24644424fc1bde1f3bfbe432c3a5224b717420da7cf876655deb987c5833aab7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724095041364930383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:554de3a8cfd0306b14e037d2df2736c671fafe4053cda737537da92371c6a2d5,PodSandboxId:c160ca374efd41f181dec40cf7fd25bcc18dd0fa99b07f7e5ac5ecd3c375cf2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041417550271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b29ce7ae2e5b57b8b7dd3fb18e375f5fb5fa32dd1c57ae7361d52ac21e4317f0,PodSandboxId:68e9795b79bd4ad7374c90dfc587d0982769d6efd09c19cb96acd612e18a01ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041488423892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1062663de6014874208d7e918cf4e228d79e7643a88797435170333e7c0a68,PodSandboxId:713514c53fd20982de9e5e25da9d0234ffc6ae16655b57d8cfb57210f6129c1c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095041321130204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163
902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c41e6ee62e55f65d9881f0a06abf0a05e7082c63e2db15f2d042c6b777eab00,PodSandboxId:330b4df5bbcc5a559549d8d5ba3be8e3d1fd3d85138c81046458b39d43fc8bfa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724095041278040198,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2126230b10e8f0116efe5b0c00eb69b8334aced694d463f9bd0dfac35e3fb55a,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724095041171347214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3be8d692f7f209eaa8f4c8738aae27df1a1518dff4906b9e43314dcc8cf9114,PodSandboxId:0520ed277638a65149668786945baac3abdc0bf6ab5fbe431edc0e278ded02fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095041066385729,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724094532962277862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393388258222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393363347208,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724094381626774299,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724094378012686594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094367193567094,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094367106237335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65a5bcdf-ce71-4535-9514-370300382198 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0bca30016c03b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   f175d67a855ad       storage-provisioner
	994f082e7404b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   3                   4636cb498c3d3       kube-controller-manager-ha-163902
	2cd1f97eb38ac       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Running             kube-apiserver            3                   f468939e669cd       kube-apiserver-ha-163902
	9cb2b38899974       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   f175d67a855ad       storage-provisioner
	d70c14a96546f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   721b941307200       busybox-7dff88458-vlrsr
	9313e58051c6a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   2                   4636cb498c3d3       kube-controller-manager-ha-163902
	1ca3a34d99a8c       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   a37935e6223fb       kube-vip-ha-163902
	b29ce7ae2e5b5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   68e9795b79bd4       coredns-6f6b679f8f-wmp8k
	554de3a8cfd03       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   c160ca374efd4       coredns-6f6b679f8f-nkths
	6e83d9b3a3807       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   24644424fc1bd       kube-proxy-wxrsv
	ed1062663de60       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   713514c53fd20       kube-scheduler-ha-163902
	7c41e6ee62e55       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   330b4df5bbcc5       kindnet-bpwjl
	2126230b10e8f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            2                   f468939e669cd       kube-apiserver-ha-163902
	a3be8d692f7f2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   0520ed277638a       etcd-ha-163902
	02444059f768b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   eb7a960ca621f       busybox-7dff88458-vlrsr
	920809b3fb8b5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   ccb6b229e5b0f       coredns-6f6b679f8f-nkths
	e3292ee2a24df       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   17befe587bdb8       coredns-6f6b679f8f-wmp8k
	2bde6d659e1cd       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    13 minutes ago       Exited              kindnet-cni               0                   10a016c587c22       kindnet-bpwjl
	db4dd64341a0f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago       Exited              kube-proxy                0                   5f1f616898161       kube-proxy-wxrsv
	4b31ffd467824       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      14 minutes ago       Exited              kube-scheduler            0                   644e4a4ea97f1       kube-scheduler-ha-163902
	d7785bd28970f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      14 minutes ago       Exited              etcd                      0                   8f73fd805b78d       etcd-ha-163902
	
	
	==> coredns [554de3a8cfd0306b14e037d2df2736c671fafe4053cda737537da92371c6a2d5] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58120->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[17187443]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 19:17:33.074) (total time: 13299ms):
	Trace[17187443]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58120->10.96.0.1:443: read: connection reset by peer 13298ms (19:17:46.373)
	Trace[17187443]: [13.299070301s] [13.299070301s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58120->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5] <==
	[INFO] 10.244.1.2:51524 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001508794s
	[INFO] 10.244.1.2:44203 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105366s
	[INFO] 10.244.1.2:39145 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000196935s
	[INFO] 10.244.1.2:53804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000174817s
	[INFO] 10.244.0.4:38242 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152582s
	[INFO] 10.244.0.4:50866 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00178155s
	[INFO] 10.244.0.4:41459 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077648s
	[INFO] 10.244.0.4:52991 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001294022s
	[INFO] 10.244.0.4:49760 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077772s
	[INFO] 10.244.2.2:52036 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000184006s
	[INFO] 10.244.2.2:42639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139597s
	[INFO] 10.244.1.2:45707 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157857s
	[INFO] 10.244.1.2:55541 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079589s
	[INFO] 10.244.0.4:39107 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114365s
	[INFO] 10.244.0.4:42814 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075113s
	[INFO] 10.244.1.2:45907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164052s
	[INFO] 10.244.1.2:50977 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168617s
	[INFO] 10.244.1.2:55449 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000213337s
	[INFO] 10.244.1.2:36556 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110937s
	[INFO] 10.244.0.4:58486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000301321s
	[INFO] 10.244.0.4:59114 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075318s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b29ce7ae2e5b57b8b7dd3fb18e375f5fb5fa32dd1c57ae7361d52ac21e4317f0] <==
	Trace[1485927947]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:17:36.296)
	Trace[1485927947]: [10.001013657s] [10.001013657s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:56650->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:56650->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:42040->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:42040->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816] <==
	[INFO] 10.244.2.2:54418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140774s
	[INFO] 10.244.2.2:59184 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158893s
	[INFO] 10.244.2.2:53883 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149814s
	[INFO] 10.244.2.2:35674 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136715s
	[INFO] 10.244.0.4:42875 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138512s
	[INFO] 10.244.0.4:58237 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102142s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1864&timeout=8m44s&timeoutSeconds=524&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1868&timeout=9m27s&timeoutSeconds=567&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1868": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1868": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1864": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1864": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1469055367]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 19:15:19.188) (total time: 12251ms):
	Trace[1469055367]: ---"Objects listed" error:Unauthorized 12251ms (19:15:31.439)
	Trace[1469055367]: [12.251295543s] [12.251295543s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[135777212]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 19:15:19.388) (total time: 12054ms):
	Trace[135777212]: ---"Objects listed" error:Unauthorized 12050ms (19:15:31.439)
	Trace[135777212]: [12.054409384s] [12.054409384s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-163902
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_06_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:06:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:20:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:18:03 +0000   Mon, 19 Aug 2024 19:06:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:18:03 +0000   Mon, 19 Aug 2024 19:06:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:18:03 +0000   Mon, 19 Aug 2024 19:06:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:18:03 +0000   Mon, 19 Aug 2024 19:06:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-163902
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d3b52f7c3a144ec8d3a6e98276775f3
	  System UUID:                4d3b52f7-c3a1-44ec-8d3a-6e98276775f3
	  Boot ID:                    26bff1c8-7a07-4ad4-9634-fcbc547b5a26
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vlrsr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-nkths             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-wmp8k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-163902                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-bpwjl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-163902             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-163902    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-wxrsv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-163902             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-163902                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 2m10s                 kube-proxy       
	  Normal   Starting                 13m                   kube-proxy       
	  Normal   NodeHasSufficientMemory  14m                   kubelet          Node ha-163902 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                   kubelet          Node ha-163902 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m                   kubelet          Node ha-163902 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           14m                   node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	  Normal   NodeReady                13m                   kubelet          Node ha-163902 status is now: NodeReady
	  Normal   RegisteredNode           12m                   node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	  Normal   RegisteredNode           11m                   node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	  Normal   NodeNotReady             3m4s (x4 over 4m18s)  kubelet          Node ha-163902 status is now: NodeNotReady
	  Warning  ContainerGCFailed        3m3s (x2 over 4m3s)   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m15s                 node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	  Normal   RegisteredNode           98s                   node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	  Normal   RegisteredNode           36s                   node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	
	
	Name:               ha-163902-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T19_07_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:07:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:20:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:18:47 +0000   Mon, 19 Aug 2024 19:18:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:18:47 +0000   Mon, 19 Aug 2024 19:18:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:18:47 +0000   Mon, 19 Aug 2024 19:18:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:18:47 +0000   Mon, 19 Aug 2024 19:18:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    ha-163902-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4ebc4d6f40f47d9854129310dcf34d7
	  System UUID:                d4ebc4d6-f40f-47d9-8541-29310dcf34d7
	  Boot ID:                    f7a7580f-2ef2-42cf-8d82-41ab8ac2dfab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9zj57                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-163902-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-97cnn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-163902-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-163902-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-4whvs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-163902-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-163902-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 113s                   kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-163902-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-163902-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-163902-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  NodeNotReady             9m30s                  node-controller  Node ha-163902-m02 status is now: NodeNotReady
	  Normal  Starting                 2m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node ha-163902-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node ha-163902-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m38s (x7 over 2m38s)  kubelet          Node ha-163902-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m15s                  node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  RegisteredNode           98s                    node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  RegisteredNode           36s                    node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	
	
	Name:               ha-163902-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T19_08_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:08:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:20:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:19:55 +0000   Mon, 19 Aug 2024 19:19:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:19:55 +0000   Mon, 19 Aug 2024 19:19:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:19:55 +0000   Mon, 19 Aug 2024 19:19:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:19:55 +0000   Mon, 19 Aug 2024 19:19:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.59
	  Hostname:    ha-163902-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4a4497fbf7a43159de7a77620b40e05
	  System UUID:                c4a4497f-bf7a-4315-9de7-a77620b40e05
	  Boot ID:                    4c046eca-e773-492a-a801-a5cbceec7ed7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4hqxq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-163902-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-72q7r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-163902-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-163902-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-xq852                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-163902-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-163902-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 35s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-163902-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-163902-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-163902-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-163902-m03 event: Registered Node ha-163902-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-163902-m03 event: Registered Node ha-163902-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-163902-m03 event: Registered Node ha-163902-m03 in Controller
	  Normal   RegisteredNode           2m15s              node-controller  Node ha-163902-m03 event: Registered Node ha-163902-m03 in Controller
	  Normal   RegisteredNode           98s                node-controller  Node ha-163902-m03 event: Registered Node ha-163902-m03 in Controller
	  Normal   NodeNotReady             95s                node-controller  Node ha-163902-m03 status is now: NodeNotReady
	  Normal   Starting                 52s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  52s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 52s                kubelet          Node ha-163902-m03 has been rebooted, boot id: 4c046eca-e773-492a-a801-a5cbceec7ed7
	  Normal   NodeHasSufficientMemory  52s (x2 over 52s)  kubelet          Node ha-163902-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    52s (x2 over 52s)  kubelet          Node ha-163902-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     52s (x2 over 52s)  kubelet          Node ha-163902-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                52s                kubelet          Node ha-163902-m03 status is now: NodeReady
	  Normal   RegisteredNode           36s                node-controller  Node ha-163902-m03 event: Registered Node ha-163902-m03 in Controller
	
	
	Name:               ha-163902-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T19_09_27_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:09:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:20:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:20:08 +0000   Mon, 19 Aug 2024 19:20:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:20:08 +0000   Mon, 19 Aug 2024 19:20:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:20:08 +0000   Mon, 19 Aug 2024 19:20:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:20:08 +0000   Mon, 19 Aug 2024 19:20:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.130
	  Hostname:    ha-163902-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d771c9152e0748dca0ecbcee5197aaea
	  System UUID:                d771c915-2e07-48dc-a0ec-bcee5197aaea
	  Boot ID:                    705d7db3-a682-4049-90f8-73fb3118ff6b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-plbmk       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-9b77p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-163902-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-163902-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-163902-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-163902-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m15s              node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal   RegisteredNode           98s                node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal   NodeNotReady             95s                node-controller  Node ha-163902-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           36s                node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-163902-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-163902-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-163902-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-163902-m04 has been rebooted, boot id: 705d7db3-a682-4049-90f8-73fb3118ff6b
	  Normal   NodeReady                8s                 kubelet          Node ha-163902-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.061246] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063525] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.198071] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.117006] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.272735] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Aug19 19:06] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +3.660968] systemd-fstab-generator[894]: Ignoring "noauto" option for root device
	[  +0.062148] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.174652] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.082985] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.372328] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.696903] kauditd_printk_skb: 23 callbacks suppressed
	[Aug19 19:07] kauditd_printk_skb: 26 callbacks suppressed
	[Aug19 19:17] systemd-fstab-generator[3521]: Ignoring "noauto" option for root device
	[  +0.148321] systemd-fstab-generator[3533]: Ignoring "noauto" option for root device
	[  +0.185110] systemd-fstab-generator[3547]: Ignoring "noauto" option for root device
	[  +0.148360] systemd-fstab-generator[3559]: Ignoring "noauto" option for root device
	[  +0.291698] systemd-fstab-generator[3587]: Ignoring "noauto" option for root device
	[  +8.652850] systemd-fstab-generator[3694]: Ignoring "noauto" option for root device
	[  +0.086368] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.548221] kauditd_printk_skb: 12 callbacks suppressed
	[ +12.096677] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.054784] kauditd_printk_skb: 1 callbacks suppressed
	[Aug19 19:18] kauditd_printk_skb: 5 callbacks suppressed
	[ +13.066788] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [a3be8d692f7f209eaa8f4c8738aae27df1a1518dff4906b9e43314dcc8cf9114] <==
	{"level":"warn","ts":"2024-08-19T19:19:19.171697Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:19:19.236232Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:19:19.336681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:19:19.415936Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:19:19.435968Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:19:19.451757Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:19:19.457268Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:19:19.461689Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:19:19.535772Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T19:19:21.705671Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.59:2380/version","remote-member-id":"8a4d37127f98560a","error":"Get \"https://192.168.39.59:2380/version\": dial tcp 192.168.39.59:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T19:19:21.705729Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8a4d37127f98560a","error":"Get \"https://192.168.39.59:2380/version\": dial tcp 192.168.39.59:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T19:19:21.780449Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8a4d37127f98560a","rtt":"0s","error":"dial tcp 192.168.39.59:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T19:19:21.780462Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8a4d37127f98560a","rtt":"0s","error":"dial tcp 192.168.39.59:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T19:19:25.708697Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.59:2380/version","remote-member-id":"8a4d37127f98560a","error":"Get \"https://192.168.39.59:2380/version\": dial tcp 192.168.39.59:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T19:19:25.708822Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8a4d37127f98560a","error":"Get \"https://192.168.39.59:2380/version\": dial tcp 192.168.39.59:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T19:19:26.781534Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8a4d37127f98560a","rtt":"0s","error":"dial tcp 192.168.39.59:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T19:19:26.781624Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8a4d37127f98560a","rtt":"0s","error":"dial tcp 192.168.39.59:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-19T19:19:29.054730Z","caller":"traceutil/trace.go:171","msg":"trace[1699428507] transaction","detail":"{read_only:false; response_revision:2436; number_of_response:1; }","duration":"133.11457ms","start":"2024-08-19T19:19:28.921583Z","end":"2024-08-19T19:19:29.054698Z","steps":["trace[1699428507] 'process raft request'  (duration: 113.135614ms)","trace[1699428507] 'compare'  (duration: 19.557854ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T19:19:29.543810Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:19:29.544363Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:19:29.546235Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:19:29.554931Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bcb2eab2b5d0a9fc","to":"8a4d37127f98560a","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T19:19:29.555001Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:19:29.559526Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bcb2eab2b5d0a9fc","to":"8a4d37127f98560a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T19:19:29.559585Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	
	
	==> etcd [d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732] <==
	{"level":"info","ts":"2024-08-19T19:15:33.182011Z","caller":"traceutil/trace.go:171","msg":"trace[791016477] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"744.789077ms","start":"2024-08-19T19:15:32.437218Z","end":"2024-08-19T19:15:33.182007Z","steps":["trace[791016477] 'agreement among raft nodes before linearized reading'  (duration: 744.768877ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:15:33.182055Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T19:15:32.437208Z","time spent":"744.84079ms","remote":"127.0.0.1:51878","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:10000 "}
	2024/08/19 19:15:33 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-08-19T19:15:33.251934Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"bcb2eab2b5d0a9fc","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-19T19:15:33.252124Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c6b68830659a3c65"}
	{"level":"info","ts":"2024-08-19T19:15:33.252188Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c6b68830659a3c65"}
	{"level":"info","ts":"2024-08-19T19:15:33.252226Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c6b68830659a3c65"}
	{"level":"info","ts":"2024-08-19T19:15:33.252324Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65"}
	{"level":"info","ts":"2024-08-19T19:15:33.252399Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65"}
	{"level":"info","ts":"2024-08-19T19:15:33.252493Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65"}
	{"level":"info","ts":"2024-08-19T19:15:33.252558Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c6b68830659a3c65"}
	{"level":"info","ts":"2024-08-19T19:15:33.252566Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:15:33.252576Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:15:33.252592Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:15:33.252712Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:15:33.252800Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:15:33.252906Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:15:33.252962Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:15:33.255864Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.227:2380"}
	{"level":"warn","ts":"2024-08-19T19:15:33.255959Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.816197001s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-19T19:15:33.255998Z","caller":"traceutil/trace.go:171","msg":"trace[419324862] range","detail":"{range_begin:; range_end:; }","duration":"1.816252837s","start":"2024-08-19T19:15:31.439735Z","end":"2024-08-19T19:15:33.255988Z","steps":["trace[419324862] 'agreement among raft nodes before linearized reading'  (duration: 1.816196874s)"],"step_count":1}
	{"level":"error","ts":"2024-08-19T19:15:33.256054Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-19T19:15:33.256168Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.227:2380"}
	{"level":"info","ts":"2024-08-19T19:15:33.256237Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-163902","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.227:2380"],"advertise-client-urls":["https://192.168.39.227:2379"]}
	
	
	==> kernel <==
	 19:20:16 up 14 min,  0 users,  load average: 0.09, 0.19, 0.16
	Linux ha-163902 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2] <==
	I0819 19:15:12.625961       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:15:12.626003       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:15:12.626204       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:15:12.626225       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:15:12.626289       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:15:12.626310       1 main.go:299] handling current node
	I0819 19:15:12.626321       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:15:12.626326       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	E0819 19:15:13.349631       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1838&timeout=7m34s&timeoutSeconds=454&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0819 19:15:22.625467       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:15:22.625570       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:15:22.625719       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:15:22.625753       1 main.go:299] handling current node
	I0819 19:15:22.625776       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:15:22.625792       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:15:22.625876       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:15:22.625896       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:15:32.629574       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:15:32.629620       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:15:32.629718       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:15:32.629724       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:15:32.629766       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:15:32.629784       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:15:32.629860       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:15:32.629891       1 main.go:299] handling current node
	
	
	==> kindnet [7c41e6ee62e55f65d9881f0a06abf0a05e7082c63e2db15f2d042c6b777eab00] <==
	I0819 19:19:42.285501       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:19:52.287809       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:19:52.287932       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:19:52.288117       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:19:52.288241       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:19:52.288351       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:19:52.288383       1 main.go:299] handling current node
	I0819 19:19:52.288410       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:19:52.288433       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:20:02.287524       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:20:02.287633       1 main.go:299] handling current node
	I0819 19:20:02.287661       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:20:02.287679       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:20:02.287813       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:20:02.287898       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:20:02.288008       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:20:02.288029       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:20:12.286104       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:20:12.286178       1 main.go:299] handling current node
	I0819 19:20:12.286197       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:20:12.286213       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:20:12.286386       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:20:12.286413       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:20:12.286486       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:20:12.286509       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2126230b10e8f0116efe5b0c00eb69b8334aced694d463f9bd0dfac35e3fb55a] <==
	I0819 19:17:21.898184       1 options.go:228] external host was not specified, using 192.168.39.227
	I0819 19:17:21.900480       1 server.go:142] Version: v1.31.0
	I0819 19:17:21.900520       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:17:22.458693       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 19:17:22.485210       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 19:17:22.497636       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 19:17:22.499346       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 19:17:22.499617       1 instance.go:232] Using reconciler: lease
	W0819 19:17:42.454095       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0819 19:17:42.454219       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0819 19:17:42.501322       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0819 19:17:42.501368       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [2cd1f97eb38ac02d43f6ffcc43488e9af5b265e9f756b1d495ed3349c40726e7] <==
	I0819 19:18:07.239217       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0819 19:18:07.239231       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0819 19:18:07.335848       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 19:18:07.336422       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 19:18:07.336524       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 19:18:07.336588       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 19:18:07.337193       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 19:18:07.342561       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 19:18:07.342685       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 19:18:07.346936       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 19:18:07.347055       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 19:18:07.347114       1 aggregator.go:171] initial CRD sync complete...
	I0819 19:18:07.347139       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 19:18:07.347223       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 19:18:07.347231       1 cache.go:39] Caches are synced for autoregister controller
	I0819 19:18:07.351496       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 19:18:07.358263       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 19:18:07.358296       1 policy_source.go:224] refreshing policies
	I0819 19:18:07.411572       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0819 19:18:07.450008       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.162 192.168.39.59]
	I0819 19:18:07.452275       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 19:18:07.466883       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0819 19:18:07.476660       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0819 19:18:08.246241       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 19:18:08.900973       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.162 192.168.39.227 192.168.39.59]
	
	
	==> kube-controller-manager [9313e58051c6a0409400c754736819f17e02fce650219c44b2b173742cea39f1] <==
	I0819 19:17:54.230726       1 serving.go:386] Generated self-signed cert in-memory
	I0819 19:17:54.727986       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 19:17:54.728080       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:17:54.729524       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 19:17:54.729679       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 19:17:54.729829       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 19:17:54.729905       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 19:18:04.732213       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.227:8443/healthz\": dial tcp 192.168.39.227:8443: connect: connection refused"
	
	
	==> kube-controller-manager [994f082e7404b797402b183967d58a813905219b02a9f4a28db44e4196488b94] <==
	I0819 19:18:41.490480       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m03"
	I0819 19:18:41.507995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:18:41.515827       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m03"
	E0819 19:18:41.582429       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"2cc6a476-c903-4661-93fe-3074c6835799\", ResourceVersion:\"2234\", Generation:1, CreationTimestamp:time.Date(2024, time.August, 19, 19, 6, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\"
,\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20240813-c6f155d6\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001ca20a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"
\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b5f8d8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeC
laimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b5f8f0), EmptyDir:(*v1.EmptyDirVolumeSource)(
nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxV
olumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b5f908), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Azu
reFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20240813-c6f155d6\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001ca20c0)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSo
urce)(0xc001ca2100)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false
, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc001b8f9e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralConta
iner(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001c8d0f0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001be5500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Ov
erhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001c90f40)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001c8d12c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:4, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailable:4, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0819 19:18:41.734355       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.379235ms"
	I0819 19:18:41.734481       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.923µs"
	I0819 19:18:43.676390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m03"
	I0819 19:18:46.743972       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:18:47.177097       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m02"
	I0819 19:18:53.754664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:18:56.824847       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m03"
	I0819 19:19:24.741833       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m03"
	I0819 19:19:24.772562       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m03"
	I0819 19:19:25.792476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="55.601µs"
	I0819 19:19:26.714905       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m03"
	I0819 19:19:40.349204       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:19:40.448077       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:19:46.315765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.45235ms"
	I0819 19:19:46.316138       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.838µs"
	I0819 19:19:55.609818       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m03"
	I0819 19:20:08.489394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:20:08.489805       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-163902-m04"
	I0819 19:20:08.511532       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:20:08.657053       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	
	
	==> kube-proxy [6e83d9b3a38071761ae1db6a7e164216729a64bc610a8238c5e743cc55f334dd] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:17:23.526566       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-163902\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 19:17:26.598736       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-163902\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 19:17:29.669970       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-163902\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 19:17:35.816367       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-163902\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 19:17:48.102247       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-163902\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0819 19:18:05.593400       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0819 19:18:05.593667       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:18:05.650666       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:18:05.650721       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:18:05.650746       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:18:05.653133       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:18:05.653373       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:18:05.653397       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:18:05.654707       1 config.go:197] "Starting service config controller"
	I0819 19:18:05.654748       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:18:05.654770       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:18:05.654773       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:18:05.655382       1 config.go:326] "Starting node config controller"
	I0819 19:18:05.655410       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:18:05.755465       1 shared_informer.go:320] Caches are synced for node config
	I0819 19:18:05.755568       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:18:05.755604       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd] <==
	E0819 19:14:14.725689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:14.725732       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:14.725787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:14.725862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:14.725902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:22.277614       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:22.277687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:22.277614       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:22.277919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:22.277772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:22.278031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:31.494696       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:31.495305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:34.566414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:34.566548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:37.639450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:37.639625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:52.998672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:52.998834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:52.999004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:52.999070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:59.141687       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:59.142383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:15:26.790095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:15:26.790265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca] <==
	W0819 19:06:11.189865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 19:06:11.189919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:11.215289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 19:06:11.215345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0819 19:06:12.630110       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 19:08:50.829572       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9zj57\": pod busybox-7dff88458-9zj57 is already assigned to node \"ha-163902-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-9zj57" node="ha-163902-m03"
	E0819 19:08:50.829705       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9zj57\": pod busybox-7dff88458-9zj57 is already assigned to node \"ha-163902-m02\"" pod="default/busybox-7dff88458-9zj57"
	E0819 19:15:18.298163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0819 19:15:19.648116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0819 19:15:20.875519       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0819 19:15:21.591797       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0819 19:15:22.224948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0819 19:15:23.768329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0819 19:15:25.832664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0819 19:15:26.080814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0819 19:15:26.347850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0819 19:15:27.257746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0819 19:15:27.935426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0819 19:15:28.886248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0819 19:15:29.954868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0819 19:15:31.448251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	I0819 19:15:33.153089       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0819 19:15:33.153214       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0819 19:15:33.153350       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0819 19:15:33.154714       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ed1062663de6014874208d7e918cf4e228d79e7643a88797435170333e7c0a68] <==
	W0819 19:17:57.769495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.227:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:17:57.769565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.227:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:17:57.863873       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.227:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:17:57.863990       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.227:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:17:59.007830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.227:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:17:59.007907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.227:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:17:59.359798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.227:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:17:59.359847       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.227:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:17:59.421944       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.227:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:17:59.422016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.227:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:17:59.996299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.227:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:17:59.996409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.227:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:18:00.434938       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.227:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:18:00.435001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.227:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:18:01.318507       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.227:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:18:01.318579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.227:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:18:01.933214       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.227:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:18:01.933290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.227:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:18:03.400243       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.227:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:18:03.400299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.227:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:18:03.864754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.227:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:18:03.864875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.227:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:18:04.540761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.227:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:18:04.540933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.227:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	I0819 19:18:15.813955       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 19:19:03 ha-163902 kubelet[1316]: E0819 19:19:03.739032    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095143738674419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:19:13 ha-163902 kubelet[1316]: E0819 19:19:13.377927    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:19:13 ha-163902 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:19:13 ha-163902 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:19:13 ha-163902 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:19:13 ha-163902 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:19:13 ha-163902 kubelet[1316]: E0819 19:19:13.741093    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095153740735334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:19:13 ha-163902 kubelet[1316]: E0819 19:19:13.741201    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095153740735334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:19:23 ha-163902 kubelet[1316]: E0819 19:19:23.743983    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095163743562776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:19:23 ha-163902 kubelet[1316]: E0819 19:19:23.744025    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095163743562776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:19:33 ha-163902 kubelet[1316]: E0819 19:19:33.745920    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095173745508985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:19:33 ha-163902 kubelet[1316]: E0819 19:19:33.746400    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095173745508985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:19:43 ha-163902 kubelet[1316]: E0819 19:19:43.749415    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095183748897894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:19:43 ha-163902 kubelet[1316]: E0819 19:19:43.749795    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095183748897894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:19:53 ha-163902 kubelet[1316]: E0819 19:19:53.752117    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095193751693006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:19:53 ha-163902 kubelet[1316]: E0819 19:19:53.752184    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095193751693006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:20:03 ha-163902 kubelet[1316]: E0819 19:20:03.755529    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095203754632415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:20:03 ha-163902 kubelet[1316]: E0819 19:20:03.755931    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095203754632415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:20:13 ha-163902 kubelet[1316]: E0819 19:20:13.378515    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:20:13 ha-163902 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:20:13 ha-163902 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:20:13 ha-163902 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:20:13 ha-163902 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:20:13 ha-163902 kubelet[1316]: E0819 19:20:13.758806    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095213758231173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:20:13 ha-163902 kubelet[1316]: E0819 19:20:13.758924    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095213758231173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:20:15.495561  460264 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-430949/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-163902 -n ha-163902
helpers_test.go:261: (dbg) Run:  kubectl --context ha-163902 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (407.29s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-163902 stop -v=7 --alsologtostderr: exit status 82 (2m0.486695438s)

                                                
                                                
-- stdout --
	* Stopping node "ha-163902-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:20:34.815997  460674 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:20:34.816146  460674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:20:34.816159  460674 out.go:358] Setting ErrFile to fd 2...
	I0819 19:20:34.816166  460674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:20:34.816349  460674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:20:34.816585  460674 out.go:352] Setting JSON to false
	I0819 19:20:34.816663  460674 mustload.go:65] Loading cluster: ha-163902
	I0819 19:20:34.817012  460674 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:20:34.817099  460674 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:20:34.817308  460674 mustload.go:65] Loading cluster: ha-163902
	I0819 19:20:34.817453  460674 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:20:34.817490  460674 stop.go:39] StopHost: ha-163902-m04
	I0819 19:20:34.817845  460674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:20:34.817888  460674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:20:34.833325  460674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44491
	I0819 19:20:34.833832  460674 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:20:34.834415  460674 main.go:141] libmachine: Using API Version  1
	I0819 19:20:34.834441  460674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:20:34.834781  460674 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:20:34.837007  460674 out.go:177] * Stopping node "ha-163902-m04"  ...
	I0819 19:20:34.838324  460674 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 19:20:34.838357  460674 main.go:141] libmachine: (ha-163902-m04) Calling .DriverName
	I0819 19:20:34.838643  460674 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 19:20:34.838674  460674 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHHostname
	I0819 19:20:34.841620  460674 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:20:34.842061  460674 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:20:03 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:20:34.842094  460674 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:20:34.842263  460674 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHPort
	I0819 19:20:34.842446  460674 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHKeyPath
	I0819 19:20:34.842630  460674 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHUsername
	I0819 19:20:34.842771  460674 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m04/id_rsa Username:docker}
	I0819 19:20:34.931536  460674 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 19:20:34.984135  460674 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 19:20:35.038075  460674 main.go:141] libmachine: Stopping "ha-163902-m04"...
	I0819 19:20:35.038109  460674 main.go:141] libmachine: (ha-163902-m04) Calling .GetState
	I0819 19:20:35.039678  460674 main.go:141] libmachine: (ha-163902-m04) Calling .Stop
	I0819 19:20:35.043418  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 0/120
	I0819 19:20:36.044861  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 1/120
	I0819 19:20:37.047155  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 2/120
	I0819 19:20:38.048603  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 3/120
	I0819 19:20:39.050806  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 4/120
	I0819 19:20:40.053052  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 5/120
	I0819 19:20:41.054547  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 6/120
	I0819 19:20:42.056702  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 7/120
	I0819 19:20:43.058139  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 8/120
	I0819 19:20:44.060211  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 9/120
	I0819 19:20:45.062718  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 10/120
	I0819 19:20:46.064537  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 11/120
	I0819 19:20:47.066011  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 12/120
	I0819 19:20:48.068049  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 13/120
	I0819 19:20:49.069565  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 14/120
	I0819 19:20:50.071563  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 15/120
	I0819 19:20:51.072930  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 16/120
	I0819 19:20:52.074660  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 17/120
	I0819 19:20:53.076168  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 18/120
	I0819 19:20:54.077997  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 19/120
	I0819 19:20:55.080367  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 20/120
	I0819 19:20:56.082080  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 21/120
	I0819 19:20:57.083343  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 22/120
	I0819 19:20:58.085011  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 23/120
	I0819 19:20:59.086663  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 24/120
	I0819 19:21:00.088389  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 25/120
	I0819 19:21:01.090064  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 26/120
	I0819 19:21:02.091694  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 27/120
	I0819 19:21:03.093176  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 28/120
	I0819 19:21:04.094861  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 29/120
	I0819 19:21:05.097157  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 30/120
	I0819 19:21:06.098842  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 31/120
	I0819 19:21:07.100388  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 32/120
	I0819 19:21:08.101891  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 33/120
	I0819 19:21:09.103294  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 34/120
	I0819 19:21:10.105244  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 35/120
	I0819 19:21:11.106599  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 36/120
	I0819 19:21:12.107957  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 37/120
	I0819 19:21:13.109673  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 38/120
	I0819 19:21:14.111159  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 39/120
	I0819 19:21:15.113331  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 40/120
	I0819 19:21:16.115893  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 41/120
	I0819 19:21:17.117896  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 42/120
	I0819 19:21:18.119263  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 43/120
	I0819 19:21:19.120795  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 44/120
	I0819 19:21:20.122700  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 45/120
	I0819 19:21:21.124204  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 46/120
	I0819 19:21:22.125819  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 47/120
	I0819 19:21:23.127369  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 48/120
	I0819 19:21:24.128831  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 49/120
	I0819 19:21:25.130802  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 50/120
	I0819 19:21:26.132286  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 51/120
	I0819 19:21:27.133741  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 52/120
	I0819 19:21:28.135703  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 53/120
	I0819 19:21:29.137052  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 54/120
	I0819 19:21:30.138978  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 55/120
	I0819 19:21:31.140514  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 56/120
	I0819 19:21:32.142182  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 57/120
	I0819 19:21:33.143537  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 58/120
	I0819 19:21:34.144952  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 59/120
	I0819 19:21:35.147261  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 60/120
	I0819 19:21:36.148921  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 61/120
	I0819 19:21:37.150938  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 62/120
	I0819 19:21:38.152528  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 63/120
	I0819 19:21:39.154100  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 64/120
	I0819 19:21:40.156165  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 65/120
	I0819 19:21:41.157658  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 66/120
	I0819 19:21:42.159255  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 67/120
	I0819 19:21:43.160680  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 68/120
	I0819 19:21:44.162313  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 69/120
	I0819 19:21:45.164332  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 70/120
	I0819 19:21:46.165980  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 71/120
	I0819 19:21:47.167570  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 72/120
	I0819 19:21:48.168988  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 73/120
	I0819 19:21:49.170773  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 74/120
	I0819 19:21:50.173108  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 75/120
	I0819 19:21:51.174564  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 76/120
	I0819 19:21:52.175966  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 77/120
	I0819 19:21:53.177452  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 78/120
	I0819 19:21:54.178803  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 79/120
	I0819 19:21:55.181058  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 80/120
	I0819 19:21:56.182558  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 81/120
	I0819 19:21:57.183884  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 82/120
	I0819 19:21:58.185314  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 83/120
	I0819 19:21:59.186640  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 84/120
	I0819 19:22:00.188589  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 85/120
	I0819 19:22:01.190145  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 86/120
	I0819 19:22:02.191369  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 87/120
	I0819 19:22:03.193500  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 88/120
	I0819 19:22:04.195748  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 89/120
	I0819 19:22:05.197351  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 90/120
	I0819 19:22:06.199807  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 91/120
	I0819 19:22:07.201120  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 92/120
	I0819 19:22:08.203267  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 93/120
	I0819 19:22:09.204671  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 94/120
	I0819 19:22:10.206624  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 95/120
	I0819 19:22:11.208066  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 96/120
	I0819 19:22:12.209522  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 97/120
	I0819 19:22:13.210852  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 98/120
	I0819 19:22:14.212250  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 99/120
	I0819 19:22:15.214695  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 100/120
	I0819 19:22:16.216052  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 101/120
	I0819 19:22:17.217680  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 102/120
	I0819 19:22:18.219958  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 103/120
	I0819 19:22:19.221474  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 104/120
	I0819 19:22:20.223825  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 105/120
	I0819 19:22:21.225346  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 106/120
	I0819 19:22:22.227846  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 107/120
	I0819 19:22:23.229475  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 108/120
	I0819 19:22:24.231086  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 109/120
	I0819 19:22:25.233337  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 110/120
	I0819 19:22:26.235619  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 111/120
	I0819 19:22:27.237338  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 112/120
	I0819 19:22:28.238594  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 113/120
	I0819 19:22:29.240094  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 114/120
	I0819 19:22:30.242285  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 115/120
	I0819 19:22:31.243602  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 116/120
	I0819 19:22:32.245093  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 117/120
	I0819 19:22:33.246855  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 118/120
	I0819 19:22:34.248271  460674 main.go:141] libmachine: (ha-163902-m04) Waiting for machine to stop 119/120
	I0819 19:22:35.248816  460674 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 19:22:35.248880  460674 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 19:22:35.250945  460674 out.go:201] 
	W0819 19:22:35.252344  460674 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 19:22:35.252361  460674 out.go:270] * 
	* 
	W0819 19:22:35.254745  460674 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 19:22:35.255917  460674 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-163902 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr: exit status 3 (19.020519056s)

                                                
                                                
-- stdout --
	ha-163902
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163902-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:22:35.304306  461088 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:22:35.304457  461088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:22:35.304482  461088 out.go:358] Setting ErrFile to fd 2...
	I0819 19:22:35.304487  461088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:22:35.304674  461088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:22:35.304898  461088 out.go:352] Setting JSON to false
	I0819 19:22:35.304935  461088 mustload.go:65] Loading cluster: ha-163902
	I0819 19:22:35.305082  461088 notify.go:220] Checking for updates...
	I0819 19:22:35.305502  461088 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:22:35.305524  461088 status.go:255] checking status of ha-163902 ...
	I0819 19:22:35.306000  461088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:22:35.306057  461088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:22:35.325967  461088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41153
	I0819 19:22:35.326481  461088 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:22:35.327146  461088 main.go:141] libmachine: Using API Version  1
	I0819 19:22:35.327180  461088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:22:35.327596  461088 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:22:35.327868  461088 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:22:35.329721  461088 status.go:330] ha-163902 host status = "Running" (err=<nil>)
	I0819 19:22:35.329742  461088 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:22:35.330069  461088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:22:35.330109  461088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:22:35.345310  461088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34845
	I0819 19:22:35.345816  461088 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:22:35.346323  461088 main.go:141] libmachine: Using API Version  1
	I0819 19:22:35.346344  461088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:22:35.346882  461088 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:22:35.347094  461088 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:22:35.350256  461088 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:22:35.350814  461088 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:22:35.350851  461088 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:22:35.351035  461088 host.go:66] Checking if "ha-163902" exists ...
	I0819 19:22:35.351375  461088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:22:35.351427  461088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:22:35.369214  461088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I0819 19:22:35.369652  461088 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:22:35.370218  461088 main.go:141] libmachine: Using API Version  1
	I0819 19:22:35.370253  461088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:22:35.370637  461088 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:22:35.370848  461088 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:22:35.371047  461088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:22:35.371085  461088 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:22:35.374195  461088 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:22:35.374687  461088 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:22:35.374714  461088 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:22:35.374929  461088 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:22:35.375126  461088 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:22:35.375279  461088 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:22:35.375429  461088 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:22:35.460945  461088 ssh_runner.go:195] Run: systemctl --version
	I0819 19:22:35.467147  461088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:22:35.484587  461088 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:22:35.484631  461088 api_server.go:166] Checking apiserver status ...
	I0819 19:22:35.484670  461088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:22:35.499840  461088 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4942/cgroup
	W0819 19:22:35.510891  461088 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4942/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:22:35.511003  461088 ssh_runner.go:195] Run: ls
	I0819 19:22:35.519694  461088 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:22:35.525927  461088 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:22:35.525958  461088 status.go:422] ha-163902 apiserver status = Running (err=<nil>)
	I0819 19:22:35.525969  461088 status.go:257] ha-163902 status: &{Name:ha-163902 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:22:35.525988  461088 status.go:255] checking status of ha-163902-m02 ...
	I0819 19:22:35.526286  461088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:22:35.526315  461088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:22:35.542163  461088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43607
	I0819 19:22:35.542666  461088 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:22:35.543130  461088 main.go:141] libmachine: Using API Version  1
	I0819 19:22:35.543153  461088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:22:35.543449  461088 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:22:35.543642  461088 main.go:141] libmachine: (ha-163902-m02) Calling .GetState
	I0819 19:22:35.545361  461088 status.go:330] ha-163902-m02 host status = "Running" (err=<nil>)
	I0819 19:22:35.545384  461088 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:22:35.545681  461088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:22:35.545706  461088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:22:35.561074  461088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32827
	I0819 19:22:35.561546  461088 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:22:35.562019  461088 main.go:141] libmachine: Using API Version  1
	I0819 19:22:35.562039  461088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:22:35.562397  461088 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:22:35.562569  461088 main.go:141] libmachine: (ha-163902-m02) Calling .GetIP
	I0819 19:22:35.565591  461088 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:22:35.566082  461088 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:17:25 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:22:35.566108  461088 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:22:35.566336  461088 host.go:66] Checking if "ha-163902-m02" exists ...
	I0819 19:22:35.566757  461088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:22:35.566790  461088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:22:35.582317  461088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44965
	I0819 19:22:35.582847  461088 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:22:35.583328  461088 main.go:141] libmachine: Using API Version  1
	I0819 19:22:35.583347  461088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:22:35.583708  461088 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:22:35.583925  461088 main.go:141] libmachine: (ha-163902-m02) Calling .DriverName
	I0819 19:22:35.584125  461088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:22:35.584152  461088 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHHostname
	I0819 19:22:35.587128  461088 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:22:35.587671  461088 main.go:141] libmachine: (ha-163902-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f5:c9", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:17:25 +0000 UTC Type:0 Mac:52:54:00:92:f5:c9 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-163902-m02 Clientid:01:52:54:00:92:f5:c9}
	I0819 19:22:35.587693  461088 main.go:141] libmachine: (ha-163902-m02) DBG | domain ha-163902-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:92:f5:c9 in network mk-ha-163902
	I0819 19:22:35.587930  461088 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHPort
	I0819 19:22:35.588172  461088 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHKeyPath
	I0819 19:22:35.588330  461088 main.go:141] libmachine: (ha-163902-m02) Calling .GetSSHUsername
	I0819 19:22:35.588491  461088 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m02/id_rsa Username:docker}
	I0819 19:22:35.673407  461088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:22:35.691238  461088 kubeconfig.go:125] found "ha-163902" server: "https://192.168.39.254:8443"
	I0819 19:22:35.691270  461088 api_server.go:166] Checking apiserver status ...
	I0819 19:22:35.691312  461088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:22:35.713482  461088 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1585/cgroup
	W0819 19:22:35.725005  461088 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1585/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:22:35.725076  461088 ssh_runner.go:195] Run: ls
	I0819 19:22:35.730184  461088 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 19:22:35.735203  461088 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 19:22:35.735232  461088 status.go:422] ha-163902-m02 apiserver status = Running (err=<nil>)
	I0819 19:22:35.735242  461088 status.go:257] ha-163902-m02 status: &{Name:ha-163902-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:22:35.735259  461088 status.go:255] checking status of ha-163902-m04 ...
	I0819 19:22:35.735561  461088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:22:35.735587  461088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:22:35.750942  461088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33277
	I0819 19:22:35.751474  461088 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:22:35.751982  461088 main.go:141] libmachine: Using API Version  1
	I0819 19:22:35.752008  461088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:22:35.752353  461088 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:22:35.752624  461088 main.go:141] libmachine: (ha-163902-m04) Calling .GetState
	I0819 19:22:35.754331  461088 status.go:330] ha-163902-m04 host status = "Running" (err=<nil>)
	I0819 19:22:35.754349  461088 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:22:35.754700  461088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:22:35.754742  461088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:22:35.770160  461088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43713
	I0819 19:22:35.770708  461088 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:22:35.771294  461088 main.go:141] libmachine: Using API Version  1
	I0819 19:22:35.771322  461088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:22:35.771714  461088 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:22:35.771926  461088 main.go:141] libmachine: (ha-163902-m04) Calling .GetIP
	I0819 19:22:35.775338  461088 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:22:35.775850  461088 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:20:03 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:22:35.775881  461088 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:22:35.776053  461088 host.go:66] Checking if "ha-163902-m04" exists ...
	I0819 19:22:35.776371  461088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:22:35.776429  461088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:22:35.792597  461088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0819 19:22:35.793110  461088 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:22:35.793684  461088 main.go:141] libmachine: Using API Version  1
	I0819 19:22:35.793705  461088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:22:35.794027  461088 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:22:35.794267  461088 main.go:141] libmachine: (ha-163902-m04) Calling .DriverName
	I0819 19:22:35.794441  461088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:22:35.794459  461088 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHHostname
	I0819 19:22:35.796986  461088 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:22:35.797426  461088 main.go:141] libmachine: (ha-163902-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:e1:30", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:20:03 +0000 UTC Type:0 Mac:52:54:00:6f:e1:30 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-163902-m04 Clientid:01:52:54:00:6f:e1:30}
	I0819 19:22:35.797455  461088 main.go:141] libmachine: (ha-163902-m04) DBG | domain ha-163902-m04 has defined IP address 192.168.39.130 and MAC address 52:54:00:6f:e1:30 in network mk-ha-163902
	I0819 19:22:35.797633  461088 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHPort
	I0819 19:22:35.797849  461088 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHKeyPath
	I0819 19:22:35.798005  461088 main.go:141] libmachine: (ha-163902-m04) Calling .GetSSHUsername
	I0819 19:22:35.798134  461088 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902-m04/id_rsa Username:docker}
	W0819 19:22:54.277356  461088 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.130:22: connect: no route to host
	W0819 19:22:54.277492  461088 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.130:22: connect: no route to host
	E0819 19:22:54.277517  461088 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.130:22: connect: no route to host
	I0819 19:22:54.277529  461088 status.go:257] ha-163902-m04 status: &{Name:ha-163902-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0819 19:22:54.277564  461088 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.130:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-163902 -n ha-163902
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-163902 logs -n 25: (1.618800129s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-163902 ssh -n ha-163902-m02 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m03_ha-163902-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m03:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04:/home/docker/cp-test_ha-163902-m03_ha-163902-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m04 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m03_ha-163902-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-163902 cp testdata/cp-test.txt                                                | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4137015802/001/cp-test_ha-163902-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902:/home/docker/cp-test_ha-163902-m04_ha-163902.txt                       |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902 sudo cat                                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m04_ha-163902.txt                                 |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m02:/home/docker/cp-test_ha-163902-m04_ha-163902-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m02 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m04_ha-163902-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m03:/home/docker/cp-test_ha-163902-m04_ha-163902-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n                                                                 | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | ha-163902-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-163902 ssh -n ha-163902-m03 sudo cat                                          | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC | 19 Aug 24 19:10 UTC |
	|         | /home/docker/cp-test_ha-163902-m04_ha-163902-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-163902 node stop m02 -v=7                                                     | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-163902 node start m02 -v=7                                                    | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-163902 -v=7                                                           | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-163902 -v=7                                                                | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-163902 --wait=true -v=7                                                    | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:15 UTC | 19 Aug 24 19:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-163902                                                                | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:20 UTC |                     |
	| node    | ha-163902 node delete m03 -v=7                                                   | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:20 UTC | 19 Aug 24 19:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-163902 stop -v=7                                                              | ha-163902 | jenkins | v1.33.1 | 19 Aug 24 19:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:15:32
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:15:32.304805  458310 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:15:32.305062  458310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:15:32.305072  458310 out.go:358] Setting ErrFile to fd 2...
	I0819 19:15:32.305076  458310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:15:32.305305  458310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:15:32.305908  458310 out.go:352] Setting JSON to false
	I0819 19:15:32.306988  458310 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10683,"bootTime":1724084249,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:15:32.307061  458310 start.go:139] virtualization: kvm guest
	I0819 19:15:32.309407  458310 out.go:177] * [ha-163902] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:15:32.310880  458310 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 19:15:32.310891  458310 notify.go:220] Checking for updates...
	I0819 19:15:32.313865  458310 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:15:32.315049  458310 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:15:32.316135  458310 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:15:32.317260  458310 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:15:32.318531  458310 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:15:32.320243  458310 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:15:32.320365  458310 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 19:15:32.320851  458310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:15:32.320904  458310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:15:32.336532  458310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I0819 19:15:32.337046  458310 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:15:32.337740  458310 main.go:141] libmachine: Using API Version  1
	I0819 19:15:32.337772  458310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:15:32.338221  458310 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:15:32.338460  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:15:32.378793  458310 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:15:32.380027  458310 start.go:297] selected driver: kvm2
	I0819 19:15:32.380058  458310 start.go:901] validating driver "kvm2" against &{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:15:32.380244  458310 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:15:32.380660  458310 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:15:32.380759  458310 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:15:32.398071  458310 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:15:32.398969  458310 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:15:32.399026  458310 cni.go:84] Creating CNI manager for ""
	I0819 19:15:32.399032  458310 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 19:15:32.399103  458310 start.go:340] cluster config:
	{Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:15:32.399282  458310 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:15:32.402103  458310 out.go:177] * Starting "ha-163902" primary control-plane node in "ha-163902" cluster
	I0819 19:15:32.403305  458310 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:15:32.403368  458310 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:15:32.403387  458310 cache.go:56] Caching tarball of preloaded images
	I0819 19:15:32.403510  458310 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:15:32.403534  458310 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:15:32.403668  458310 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/config.json ...
	I0819 19:15:32.403918  458310 start.go:360] acquireMachinesLock for ha-163902: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:15:32.403973  458310 start.go:364] duration metric: took 32.19µs to acquireMachinesLock for "ha-163902"
	I0819 19:15:32.403989  458310 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:15:32.403994  458310 fix.go:54] fixHost starting: 
	I0819 19:15:32.404246  458310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:15:32.404276  458310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:15:32.419758  458310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34149
	I0819 19:15:32.420308  458310 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:15:32.420895  458310 main.go:141] libmachine: Using API Version  1
	I0819 19:15:32.420925  458310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:15:32.421339  458310 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:15:32.421561  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:15:32.421753  458310 main.go:141] libmachine: (ha-163902) Calling .GetState
	I0819 19:15:32.423555  458310 fix.go:112] recreateIfNeeded on ha-163902: state=Running err=<nil>
	W0819 19:15:32.423580  458310 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:15:32.425714  458310 out.go:177] * Updating the running kvm2 "ha-163902" VM ...
	I0819 19:15:32.427087  458310 machine.go:93] provisionDockerMachine start ...
	I0819 19:15:32.427117  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:15:32.427503  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:15:32.430552  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.431221  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:15:32.431255  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.431496  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:15:32.431746  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:32.431920  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:32.432042  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:15:32.432228  458310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:15:32.432430  458310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:15:32.432444  458310 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:15:32.537930  458310 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-163902
	
	I0819 19:15:32.537960  458310 main.go:141] libmachine: (ha-163902) Calling .GetMachineName
	I0819 19:15:32.538287  458310 buildroot.go:166] provisioning hostname "ha-163902"
	I0819 19:15:32.538314  458310 main.go:141] libmachine: (ha-163902) Calling .GetMachineName
	I0819 19:15:32.538547  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:15:32.541554  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.541994  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:15:32.542023  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.542257  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:15:32.542473  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:32.542651  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:32.542854  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:15:32.543057  458310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:15:32.543273  458310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:15:32.543289  458310 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-163902 && echo "ha-163902" | sudo tee /etc/hostname
	I0819 19:15:32.666907  458310 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-163902
	
	I0819 19:15:32.666955  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:15:32.669696  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.670010  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:15:32.670042  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.670206  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:15:32.670396  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:32.670613  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:32.670751  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:15:32.670948  458310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:15:32.671172  458310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:15:32.671190  458310 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-163902' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-163902/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-163902' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:15:32.778233  458310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:15:32.778269  458310 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:15:32.778324  458310 buildroot.go:174] setting up certificates
	I0819 19:15:32.778338  458310 provision.go:84] configureAuth start
	I0819 19:15:32.778353  458310 main.go:141] libmachine: (ha-163902) Calling .GetMachineName
	I0819 19:15:32.778705  458310 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:15:32.781189  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.781587  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:15:32.781620  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.781750  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:15:32.784099  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.784506  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:15:32.784536  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.784683  458310 provision.go:143] copyHostCerts
	I0819 19:15:32.784714  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:15:32.784755  458310 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:15:32.784779  458310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:15:32.784863  458310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:15:32.784964  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:15:32.784988  458310 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:15:32.784997  458310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:15:32.785035  458310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:15:32.785091  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:15:32.785115  458310 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:15:32.785124  458310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:15:32.785179  458310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:15:32.785245  458310 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.ha-163902 san=[127.0.0.1 192.168.39.227 ha-163902 localhost minikube]
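(For context on the provision.go:117 step above: the server certificate is reissued so its SANs cover the loopback address, the VM IP and the machine names listed. The Go sketch below only illustrates issuing a certificate with that SAN set using the standard library; it self-signs instead of signing against minikube's CA, takes the organization and SANs from the log line, and is not minikube's actual provisioning code.)

	// Illustrative sketch: issue a server cert carrying the SANs seen in the log above.
	// Errors are ignored for brevity; minikube signs against its own CA instead of self-signing.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-163902"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN list taken from the provision.go:117 line above
			DNSNames:    []string{"ha-163902", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.227")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}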
	I0819 19:15:32.881608  458310 provision.go:177] copyRemoteCerts
	I0819 19:15:32.881678  458310 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:15:32.881705  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:15:32.884812  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.885289  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:15:32.885325  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:32.885589  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:15:32.885840  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:32.886075  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:15:32.886282  458310 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:15:32.969041  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 19:15:32.969182  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:15:33.003032  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 19:15:33.003129  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0819 19:15:33.032035  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 19:15:33.032111  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:15:33.058659  458310 provision.go:87] duration metric: took 280.307466ms to configureAuth
	I0819 19:15:33.058694  458310 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:15:33.058919  458310 config.go:182] Loaded profile config "ha-163902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:15:33.058998  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:15:33.061861  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:33.062295  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:15:33.062318  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:15:33.062560  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:15:33.062813  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:33.062985  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:15:33.063134  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:15:33.063278  458310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:15:33.063471  458310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:15:33.063499  458310 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:17:04.000215  458310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:17:04.000269  458310 machine.go:96] duration metric: took 1m31.573162564s to provisionDockerMachine
	I0819 19:17:04.000287  458310 start.go:293] postStartSetup for "ha-163902" (driver="kvm2")
	I0819 19:17:04.000308  458310 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:17:04.000333  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:17:04.000730  458310 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:17:04.000762  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:17:04.003953  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.004486  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:17:04.004517  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.004701  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:17:04.004908  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:17:04.005045  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:17:04.005182  458310 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:17:04.088269  458310 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:17:04.092949  458310 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:17:04.092980  458310 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:17:04.093058  458310 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:17:04.093157  458310 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:17:04.093168  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 19:17:04.093265  458310 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:17:04.103192  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:17:04.127346  458310 start.go:296] duration metric: took 127.041833ms for postStartSetup
	I0819 19:17:04.127402  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:17:04.127817  458310 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 19:17:04.127848  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:17:04.130645  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.131135  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:17:04.131190  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.131414  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:17:04.131650  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:17:04.131821  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:17:04.131987  458310 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	W0819 19:17:04.211896  458310 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0819 19:17:04.211928  458310 fix.go:56] duration metric: took 1m31.807933704s for fixHost
	I0819 19:17:04.211952  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:17:04.214743  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.215117  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:17:04.215162  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.215379  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:17:04.215616  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:17:04.215798  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:17:04.215944  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:17:04.216114  458310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:17:04.216297  458310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0819 19:17:04.216307  458310 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:17:04.318130  458310 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724095024.265455939
	
	I0819 19:17:04.318164  458310 fix.go:216] guest clock: 1724095024.265455939
	I0819 19:17:04.318173  458310 fix.go:229] Guest: 2024-08-19 19:17:04.265455939 +0000 UTC Remote: 2024-08-19 19:17:04.211936554 +0000 UTC m=+91.949434439 (delta=53.519385ms)
	I0819 19:17:04.318195  458310 fix.go:200] guest clock delta is within tolerance: 53.519385ms
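(For context on the two fix.go lines above: the guest value is the raw `date +%s.%N` output, and the reported delta is the guest time minus the host-side reference captured when the SSH command returned. The sketch below shows that arithmetic in minimal form; the parsing helper and the tolerance constant are assumptions for illustration, not minikube's exact logic.)

	// Illustrative sketch: parse the "date +%s.%N" output from the log and
	// compare it with the host clock, as the fix.go lines above report.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(s string) (time.Time, error) {
		// assumes the %N field carries the full nine nanosecond digits, as in the log
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := parseGuestClock("1724095024.265455939") // value from the log
		delta := guest.Sub(time.Now())
		// the exact tolerance minikube applies is not shown here; 2s is an assumed example
		const tolerance = 2 * time.Second
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta.Abs() <= tolerance)
	}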
	I0819 19:17:04.318203  458310 start.go:83] releasing machines lock for "ha-163902", held for 1m31.91421829s
	I0819 19:17:04.318228  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:17:04.318506  458310 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:17:04.321421  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.321855  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:17:04.321882  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.322061  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:17:04.322666  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:17:04.322856  458310 main.go:141] libmachine: (ha-163902) Calling .DriverName
	I0819 19:17:04.322954  458310 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:17:04.323028  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:17:04.323062  458310 ssh_runner.go:195] Run: cat /version.json
	I0819 19:17:04.323082  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHHostname
	I0819 19:17:04.325353  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.325633  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:17:04.325659  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.325682  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.325865  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:17:04.326065  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:17:04.326178  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:17:04.326186  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:17:04.326202  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:04.326325  458310 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:17:04.326380  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHPort
	I0819 19:17:04.326532  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHKeyPath
	I0819 19:17:04.326689  458310 main.go:141] libmachine: (ha-163902) Calling .GetSSHUsername
	I0819 19:17:04.326837  458310 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/ha-163902/id_rsa Username:docker}
	I0819 19:17:04.428691  458310 ssh_runner.go:195] Run: systemctl --version
	I0819 19:17:04.434889  458310 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:17:04.590668  458310 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:17:04.596793  458310 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:17:04.596883  458310 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:17:04.606251  458310 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 19:17:04.606280  458310 start.go:495] detecting cgroup driver to use...
	I0819 19:17:04.606357  458310 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:17:04.626815  458310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:17:04.646511  458310 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:17:04.646583  458310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:17:04.667636  458310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:17:04.682111  458310 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:17:04.847240  458310 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:17:04.993719  458310 docker.go:233] disabling docker service ...
	I0819 19:17:04.993810  458310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:17:05.011211  458310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:17:05.026695  458310 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:17:05.179580  458310 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:17:05.325039  458310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:17:05.339477  458310 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:17:05.358706  458310 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:17:05.358784  458310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:17:05.369817  458310 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:17:05.369897  458310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:17:05.381082  458310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:17:05.392642  458310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:17:05.403832  458310 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:17:05.415556  458310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:17:05.427195  458310 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:17:05.438176  458310 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:17:05.449088  458310 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:17:05.459615  458310 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:17:05.470683  458310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:17:05.619095  458310 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:17:13.762513  458310 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.143368236s)
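(Taken together, the sed edits above shape the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf before the restart. The fragment below is an illustrative reconstruction of the relevant keys after those edits, not output captured from the VM; the section headers are assumed from CRI-O's standard TOML layout, and unrelated keys are omitted.)

	# illustrative reconstruction of /etc/crio/crio.conf.d/02-crio.conf after the edits above
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]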
	I0819 19:17:13.762545  458310 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:17:13.762608  458310 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:17:13.767753  458310 start.go:563] Will wait 60s for crictl version
	I0819 19:17:13.767833  458310 ssh_runner.go:195] Run: which crictl
	I0819 19:17:13.771516  458310 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:17:13.803961  458310 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:17:13.804049  458310 ssh_runner.go:195] Run: crio --version
	I0819 19:17:13.832688  458310 ssh_runner.go:195] Run: crio --version
	I0819 19:17:13.862955  458310 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:17:13.864365  458310 main.go:141] libmachine: (ha-163902) Calling .GetIP
	I0819 19:17:13.866970  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:13.867374  458310 main.go:141] libmachine: (ha-163902) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:94:b4", ip: ""} in network mk-ha-163902: {Iface:virbr1 ExpiryTime:2024-08-19 20:05:45 +0000 UTC Type:0 Mac:52:54:00:57:94:b4 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-163902 Clientid:01:52:54:00:57:94:b4}
	I0819 19:17:13.867405  458310 main.go:141] libmachine: (ha-163902) DBG | domain ha-163902 has defined IP address 192.168.39.227 and MAC address 52:54:00:57:94:b4 in network mk-ha-163902
	I0819 19:17:13.867628  458310 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:17:13.872654  458310 kubeadm.go:883] updating cluster {Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:17:13.872877  458310 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:17:13.872942  458310 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:17:13.925087  458310 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:17:13.925112  458310 crio.go:433] Images already preloaded, skipping extraction
	I0819 19:17:13.925176  458310 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:17:13.958501  458310 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:17:13.958537  458310 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:17:13.958547  458310 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0 crio true true} ...
	I0819 19:17:13.958644  458310 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-163902 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:17:13.958712  458310 ssh_runner.go:195] Run: crio config
	I0819 19:17:14.020413  458310 cni.go:84] Creating CNI manager for ""
	I0819 19:17:14.020437  458310 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 19:17:14.020449  458310 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:17:14.020477  458310 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-163902 NodeName:ha-163902 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:17:14.020632  458310 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-163902"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:17:14.020654  458310 kube-vip.go:115] generating kube-vip config ...
	I0819 19:17:14.020710  458310 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 19:17:14.032530  458310 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 19:17:14.032634  458310 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0819 19:17:14.032691  458310 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:17:14.043070  458310 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:17:14.043151  458310 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 19:17:14.053711  458310 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 19:17:14.071050  458310 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:17:14.088047  458310 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 19:17:14.105285  458310 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 19:17:14.124259  458310 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 19:17:14.128565  458310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:17:14.273759  458310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:17:14.288848  458310 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902 for IP: 192.168.39.227
	I0819 19:17:14.288880  458310 certs.go:194] generating shared ca certs ...
	I0819 19:17:14.288907  458310 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.289086  458310 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:17:14.289154  458310 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:17:14.289170  458310 certs.go:256] generating profile certs ...
	I0819 19:17:14.289257  458310 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/client.key
	I0819 19:17:14.289292  458310 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.cada45d2
	I0819 19:17:14.289315  458310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.cada45d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.162 192.168.39.59 192.168.39.254]
	I0819 19:17:14.470970  458310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.cada45d2 ...
	I0819 19:17:14.471004  458310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.cada45d2: {Name:mk97b8324aec57377fbcdea1ffa69849c0be6bfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.471173  458310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.cada45d2 ...
	I0819 19:17:14.471185  458310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.cada45d2: {Name:mk44f37a5a74a4ac4422be5fb78ed86c85ebcf19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.471253  458310 certs.go:381] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt.cada45d2 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt
	I0819 19:17:14.471405  458310 certs.go:385] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key.cada45d2 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key
	I0819 19:17:14.471526  458310 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key
	I0819 19:17:14.471545  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 19:17:14.471560  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 19:17:14.471572  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 19:17:14.471581  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 19:17:14.471593  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 19:17:14.471603  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 19:17:14.471615  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 19:17:14.471624  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 19:17:14.471671  458310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:17:14.471698  458310 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:17:14.471707  458310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:17:14.471729  458310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:17:14.471758  458310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:17:14.471782  458310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:17:14.471819  458310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:17:14.471844  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 19:17:14.471857  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:17:14.471868  458310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 19:17:14.472433  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:17:14.498015  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:17:14.522790  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:17:14.547630  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:17:14.573874  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 19:17:14.598716  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:17:14.623610  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:17:14.649218  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/ha-163902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:17:14.675412  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:17:14.700408  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:17:14.725232  458310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:17:14.749879  458310 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:17:14.767067  458310 ssh_runner.go:195] Run: openssl version
	I0819 19:17:14.772898  458310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:17:14.784149  458310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:17:14.788746  458310 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:17:14.788819  458310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:17:14.794862  458310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:17:14.804931  458310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:17:14.816021  458310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:17:14.820802  458310 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:17:14.820865  458310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:17:14.826698  458310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:17:14.836637  458310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:17:14.847774  458310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:17:14.852627  458310 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:17:14.852698  458310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:17:14.858315  458310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:17:14.867927  458310 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:17:14.872905  458310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:17:14.878680  458310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:17:14.884545  458310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:17:14.890288  458310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:17:14.896616  458310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:17:14.902627  458310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
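(The six openssl invocations above all pass `-checkend 86400`, i.e. they ask whether each control-plane certificate will still be valid 24 hours from now. The Go sketch below is a rough standard-library equivalent for a single PEM file; the path in main and the error handling are illustrative only, since in the log these checks run inside the VM over SSH.)

	// Illustrative sketch: a Go-side analogue of `openssl x509 -noout -in <cert> -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within the given window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// path is illustrative; in the log the checks run inside the VM over SSH
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}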
	I0819 19:17:14.908236  458310 kubeadm.go:392] StartCluster: {Name:ha-163902 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-163902 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:17:14.908372  458310 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:17:14.908421  458310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:17:14.945183  458310 cri.go:89] found id: "4cc8ba41d8cdd81b9ab345470fb3e91b985359c81d50e13b5389284e1e6a3b8c"
	I0819 19:17:14.945206  458310 cri.go:89] found id: "87bc6b08ac735cbe640bfc9921c1ff87a6eca1047a9c4e40b3efcc4fa384a480"
	I0819 19:17:14.945209  458310 cri.go:89] found id: "5cfb4b337ff7bdc8c86acefbd6abfbfdc390e5b523892c04a98267b224398180"
	I0819 19:17:14.945213  458310 cri.go:89] found id: "259a75894a0e7bb2cdc9cd2b3363c853666a4c083456ff5d18e290b19f68e62d"
	I0819 19:17:14.945215  458310 cri.go:89] found id: "920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5"
	I0819 19:17:14.945218  458310 cri.go:89] found id: "e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816"
	I0819 19:17:14.945223  458310 cri.go:89] found id: "2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2"
	I0819 19:17:14.945225  458310 cri.go:89] found id: "db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd"
	I0819 19:17:14.945228  458310 cri.go:89] found id: "4f34db6fe664bceaf0e9d708d1611192c9077289166c5e6eb41e36c67d759f40"
	I0819 19:17:14.945234  458310 cri.go:89] found id: "4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca"
	I0819 19:17:14.945236  458310 cri.go:89] found id: "63a9dbc3e9af7e2076b8b04c548b2a1cb5c385838357a95e7bc2a86709324ae5"
	I0819 19:17:14.945239  458310 cri.go:89] found id: "8fca5e9aea9309a38211d91fdbe50e67041a24d8dc547a7cff8edefeb4c57ae6"
	I0819 19:17:14.945241  458310 cri.go:89] found id: "d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732"
	I0819 19:17:14.945244  458310 cri.go:89] found id: ""
	I0819 19:17:14.945292  458310 ssh_runner.go:195] Run: sudo runc list -f json
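	The cri.go lines above record how the kube-system container IDs were gathered: the test harness shells out to `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` and collects one container ID per output line. A minimal Go sketch of that listing step follows (not minikube's cri.go itself; the helper name kubeSystemContainerIDs is hypothetical), assuming crictl is on PATH and sudo is available as in the logged command.

	// crilist.go - sketch of listing kube-system container IDs via crictl.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainerIDs runs the same crictl invocation seen in the log
	// and returns the non-empty lines of its output as container IDs.
	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps: %w", err)
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}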
	
	
	==> CRI-O <==
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.882716806Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095374882648077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c10a0b0e-a678-4186-a87b-8ab7265ba568 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.884759289Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61ab134c-a62d-4de3-b220-2d462839a5d2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.884883848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61ab134c-a62d-4de3-b220-2d462839a5d2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.885487934Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0bca30016c03bcea841c86b727ac0060d8f65be32595f91b8d45aa0580826225,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095123374479770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994f082e7404b797402b183967d58a813905219b02a9f4a28db44e4196488b94,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095116374514664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cd1f97eb38ac02d43f6ffcc43488e9af5b265e9f756b1d495ed3349c40726e7,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095085376542835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb2b38899974af1867564b1152197f3906ab54765c23bcd44911a7a5ca28be5,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724095080373407284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70c14a96546fb6ec5ddf99807b73aef57277826acf7fee28e5624d2558fbf26,PodSandboxId:721b9413072006da57e99aefe4690db2a48263d82b0060a81e596026a2178cf1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724095074759489851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9313e58051c6a0409400c754736819f17e02fce650219c44b2b173742cea39f1,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724095073830784091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca3a34d99a8c33c15214512a1c85be8b0abdf6c4fc44721cd0b7910eb4dd026,PodSandboxId:a37935e6223fb69ff8606b4a898ccde537b356d908309ec97d0b8e2bd5829bf1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724095055913094790,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5810f2181e3f187cb6813890f6c942,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e83d9b3a38071761ae1db6a7e164216729a64bc610a8238c5e743cc55f334dd,PodSandboxId:24644424fc1bde1f3bfbe432c3a5224b717420da7cf876655deb987c5833aab7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724095041364930383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:554de3a8cfd0306b14e037d2df2736c671fafe4053cda737537da92371c6a2d5,PodSandboxId:c160ca374efd41f181dec40cf7fd25bcc18dd0fa99b07f7e5ac5ecd3c375cf2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041417550271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b29ce7ae2e5b57b8b7dd3fb18e375f5fb5fa32dd1c57ae7361d52ac21e4317f0,PodSandboxId:68e9795b79bd4ad7374c90dfc587d0982769d6efd09c19cb96acd612e18a01ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041488423892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1062663de6014874208d7e918cf4e228d79e7643a88797435170333e7c0a68,PodSandboxId:713514c53fd20982de9e5e25da9d0234ffc6ae16655b57d8cfb57210f6129c1c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095041321130204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163
902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c41e6ee62e55f65d9881f0a06abf0a05e7082c63e2db15f2d042c6b777eab00,PodSandboxId:330b4df5bbcc5a559549d8d5ba3be8e3d1fd3d85138c81046458b39d43fc8bfa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724095041278040198,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2126230b10e8f0116efe5b0c00eb69b8334aced694d463f9bd0dfac35e3fb55a,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724095041171347214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3be8d692f7f209eaa8f4c8738aae27df1a1518dff4906b9e43314dcc8cf9114,PodSandboxId:0520ed277638a65149668786945baac3abdc0bf6ab5fbe431edc0e278ded02fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095041066385729,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724094532962277862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393388258222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393363347208,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724094381626774299,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724094378012686594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094367193567094,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094367106237335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=61ab134c-a62d-4de3-b220-2d462839a5d2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.927714318Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1c4b46e-9754-4118-9239-8986d8760e9f name=/runtime.v1.RuntimeService/Version
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.927793390Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1c4b46e-9754-4118-9239-8986d8760e9f name=/runtime.v1.RuntimeService/Version
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.928906179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=410a2cdf-4fc4-4c1b-b050-979c8c98e41b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.929492949Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095374929468259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=410a2cdf-4fc4-4c1b-b050-979c8c98e41b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.930286955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28fb8944-6c30-48e8-82c8-86d352d93e41 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.930357358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28fb8944-6c30-48e8-82c8-86d352d93e41 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.931316174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0bca30016c03bcea841c86b727ac0060d8f65be32595f91b8d45aa0580826225,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095123374479770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994f082e7404b797402b183967d58a813905219b02a9f4a28db44e4196488b94,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095116374514664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cd1f97eb38ac02d43f6ffcc43488e9af5b265e9f756b1d495ed3349c40726e7,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095085376542835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb2b38899974af1867564b1152197f3906ab54765c23bcd44911a7a5ca28be5,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724095080373407284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70c14a96546fb6ec5ddf99807b73aef57277826acf7fee28e5624d2558fbf26,PodSandboxId:721b9413072006da57e99aefe4690db2a48263d82b0060a81e596026a2178cf1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724095074759489851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9313e58051c6a0409400c754736819f17e02fce650219c44b2b173742cea39f1,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724095073830784091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca3a34d99a8c33c15214512a1c85be8b0abdf6c4fc44721cd0b7910eb4dd026,PodSandboxId:a37935e6223fb69ff8606b4a898ccde537b356d908309ec97d0b8e2bd5829bf1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724095055913094790,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5810f2181e3f187cb6813890f6c942,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e83d9b3a38071761ae1db6a7e164216729a64bc610a8238c5e743cc55f334dd,PodSandboxId:24644424fc1bde1f3bfbe432c3a5224b717420da7cf876655deb987c5833aab7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724095041364930383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:554de3a8cfd0306b14e037d2df2736c671fafe4053cda737537da92371c6a2d5,PodSandboxId:c160ca374efd41f181dec40cf7fd25bcc18dd0fa99b07f7e5ac5ecd3c375cf2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041417550271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b29ce7ae2e5b57b8b7dd3fb18e375f5fb5fa32dd1c57ae7361d52ac21e4317f0,PodSandboxId:68e9795b79bd4ad7374c90dfc587d0982769d6efd09c19cb96acd612e18a01ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041488423892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1062663de6014874208d7e918cf4e228d79e7643a88797435170333e7c0a68,PodSandboxId:713514c53fd20982de9e5e25da9d0234ffc6ae16655b57d8cfb57210f6129c1c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095041321130204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163
902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c41e6ee62e55f65d9881f0a06abf0a05e7082c63e2db15f2d042c6b777eab00,PodSandboxId:330b4df5bbcc5a559549d8d5ba3be8e3d1fd3d85138c81046458b39d43fc8bfa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724095041278040198,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2126230b10e8f0116efe5b0c00eb69b8334aced694d463f9bd0dfac35e3fb55a,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724095041171347214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3be8d692f7f209eaa8f4c8738aae27df1a1518dff4906b9e43314dcc8cf9114,PodSandboxId:0520ed277638a65149668786945baac3abdc0bf6ab5fbe431edc0e278ded02fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095041066385729,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724094532962277862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393388258222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393363347208,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724094381626774299,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724094378012686594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094367193567094,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094367106237335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28fb8944-6c30-48e8-82c8-86d352d93e41 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.978655274Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a80a77d5-2068-4d55-8214-a7c275565eb0 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.978744380Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a80a77d5-2068-4d55-8214-a7c275565eb0 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.980399564Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=850a26b1-a9a2-4a88-a951-3402de315f92 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.980847512Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095374980822378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=850a26b1-a9a2-4a88-a951-3402de315f92 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.981453418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2724e3e-e919-4b6e-826a-3e61d8baf1fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.981514598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2724e3e-e919-4b6e-826a-3e61d8baf1fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:22:54 ha-163902 crio[3608]: time="2024-08-19 19:22:54.981920026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0bca30016c03bcea841c86b727ac0060d8f65be32595f91b8d45aa0580826225,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095123374479770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994f082e7404b797402b183967d58a813905219b02a9f4a28db44e4196488b94,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095116374514664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cd1f97eb38ac02d43f6ffcc43488e9af5b265e9f756b1d495ed3349c40726e7,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095085376542835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb2b38899974af1867564b1152197f3906ab54765c23bcd44911a7a5ca28be5,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724095080373407284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70c14a96546fb6ec5ddf99807b73aef57277826acf7fee28e5624d2558fbf26,PodSandboxId:721b9413072006da57e99aefe4690db2a48263d82b0060a81e596026a2178cf1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724095074759489851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9313e58051c6a0409400c754736819f17e02fce650219c44b2b173742cea39f1,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724095073830784091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca3a34d99a8c33c15214512a1c85be8b0abdf6c4fc44721cd0b7910eb4dd026,PodSandboxId:a37935e6223fb69ff8606b4a898ccde537b356d908309ec97d0b8e2bd5829bf1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724095055913094790,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5810f2181e3f187cb6813890f6c942,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e83d9b3a38071761ae1db6a7e164216729a64bc610a8238c5e743cc55f334dd,PodSandboxId:24644424fc1bde1f3bfbe432c3a5224b717420da7cf876655deb987c5833aab7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724095041364930383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:554de3a8cfd0306b14e037d2df2736c671fafe4053cda737537da92371c6a2d5,PodSandboxId:c160ca374efd41f181dec40cf7fd25bcc18dd0fa99b07f7e5ac5ecd3c375cf2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041417550271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b29ce7ae2e5b57b8b7dd3fb18e375f5fb5fa32dd1c57ae7361d52ac21e4317f0,PodSandboxId:68e9795b79bd4ad7374c90dfc587d0982769d6efd09c19cb96acd612e18a01ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041488423892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1062663de6014874208d7e918cf4e228d79e7643a88797435170333e7c0a68,PodSandboxId:713514c53fd20982de9e5e25da9d0234ffc6ae16655b57d8cfb57210f6129c1c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095041321130204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163
902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c41e6ee62e55f65d9881f0a06abf0a05e7082c63e2db15f2d042c6b777eab00,PodSandboxId:330b4df5bbcc5a559549d8d5ba3be8e3d1fd3d85138c81046458b39d43fc8bfa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724095041278040198,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2126230b10e8f0116efe5b0c00eb69b8334aced694d463f9bd0dfac35e3fb55a,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724095041171347214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3be8d692f7f209eaa8f4c8738aae27df1a1518dff4906b9e43314dcc8cf9114,PodSandboxId:0520ed277638a65149668786945baac3abdc0bf6ab5fbe431edc0e278ded02fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095041066385729,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724094532962277862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393388258222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393363347208,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724094381626774299,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724094378012686594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094367193567094,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094367106237335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2724e3e-e919-4b6e-826a-3e61d8baf1fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:22:55 ha-163902 crio[3608]: time="2024-08-19 19:22:55.024215978Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f880f435-8cd8-43d9-8ce1-fad4845fd5fc name=/runtime.v1.RuntimeService/Version
	Aug 19 19:22:55 ha-163902 crio[3608]: time="2024-08-19 19:22:55.024302775Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f880f435-8cd8-43d9-8ce1-fad4845fd5fc name=/runtime.v1.RuntimeService/Version
	Aug 19 19:22:55 ha-163902 crio[3608]: time="2024-08-19 19:22:55.025209248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eec5e0bb-f60a-4a89-8f54-4c1207bd53eb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:22:55 ha-163902 crio[3608]: time="2024-08-19 19:22:55.025639656Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095375025616849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eec5e0bb-f60a-4a89-8f54-4c1207bd53eb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:22:55 ha-163902 crio[3608]: time="2024-08-19 19:22:55.026329006Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=721b4a17-25e5-40b6-810e-b6c94d3d3760 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:22:55 ha-163902 crio[3608]: time="2024-08-19 19:22:55.026406706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=721b4a17-25e5-40b6-810e-b6c94d3d3760 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:22:55 ha-163902 crio[3608]: time="2024-08-19 19:22:55.026820441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0bca30016c03bcea841c86b727ac0060d8f65be32595f91b8d45aa0580826225,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095123374479770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994f082e7404b797402b183967d58a813905219b02a9f4a28db44e4196488b94,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095116374514664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cd1f97eb38ac02d43f6ffcc43488e9af5b265e9f756b1d495ed3349c40726e7,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095085376542835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb2b38899974af1867564b1152197f3906ab54765c23bcd44911a7a5ca28be5,PodSandboxId:f175d67a855adfa2b53b141c24c239cd7f081798ea618bdea4472a291ecc0657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724095080373407284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05dffa5a-3372-4a79-94ad-33d14a4b7fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70c14a96546fb6ec5ddf99807b73aef57277826acf7fee28e5624d2558fbf26,PodSandboxId:721b9413072006da57e99aefe4690db2a48263d82b0060a81e596026a2178cf1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724095074759489851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9313e58051c6a0409400c754736819f17e02fce650219c44b2b173742cea39f1,PodSandboxId:4636cb498c3d303813e14d219ffeaefb55ab4fab1886e92046b3e30fc1b4813a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724095073830784091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8224888fe20eec2559ea452ef643d9,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca3a34d99a8c33c15214512a1c85be8b0abdf6c4fc44721cd0b7910eb4dd026,PodSandboxId:a37935e6223fb69ff8606b4a898ccde537b356d908309ec97d0b8e2bd5829bf1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724095055913094790,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5810f2181e3f187cb6813890f6c942,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e83d9b3a38071761ae1db6a7e164216729a64bc610a8238c5e743cc55f334dd,PodSandboxId:24644424fc1bde1f3bfbe432c3a5224b717420da7cf876655deb987c5833aab7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724095041364930383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:554de3a8cfd0306b14e037d2df2736c671fafe4053cda737537da92371c6a2d5,PodSandboxId:c160ca374efd41f181dec40cf7fd25bcc18dd0fa99b07f7e5ac5ecd3c375cf2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041417550271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b29ce7ae2e5b57b8b7dd3fb18e375f5fb5fa32dd1c57ae7361d52ac21e4317f0,PodSandboxId:68e9795b79bd4ad7374c90dfc587d0982769d6efd09c19cb96acd612e18a01ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095041488423892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1062663de6014874208d7e918cf4e228d79e7643a88797435170333e7c0a68,PodSandboxId:713514c53fd20982de9e5e25da9d0234ffc6ae16655b57d8cfb57210f6129c1c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095041321130204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163
902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c41e6ee62e55f65d9881f0a06abf0a05e7082c63e2db15f2d042c6b777eab00,PodSandboxId:330b4df5bbcc5a559549d8d5ba3be8e3d1fd3d85138c81046458b39d43fc8bfa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724095041278040198,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2126230b10e8f0116efe5b0c00eb69b8334aced694d463f9bd0dfac35e3fb55a,PodSandboxId:f468939e669cd7542215698c4edf02f3ee86cc55024a9551743844812ce76e80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724095041171347214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8ece5b6fb61fc1df2c5e3bb729fbf8fe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3be8d692f7f209eaa8f4c8738aae27df1a1518dff4906b9e43314dcc8cf9114,PodSandboxId:0520ed277638a65149668786945baac3abdc0bf6ab5fbe431edc0e278ded02fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095041066385729,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02444059f768b434bec5a2fd4b0d4fbb732968f540047c4aa7b8a99a1c65bb7d,PodSandboxId:eb7a960ca621f28de1cf7df7ee989ec445634c3424342a6e021968e7cdf42a07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724094532962277862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vlrsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c7250fd-18ad-4bb4-86e6-2d8d2fff0960,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5,PodSandboxId:ccb6b229e5b0f48fa137a962df49ea8e4097412f9c8e6af3d688045b8f853f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393388258222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nkths,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6dfcbf1-0cb2-4a92-b18f-75bf375a36a2,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816,PodSandboxId:17befe587bdb85618dca3e7976f660e17302ba9f247c3ccf2ac0ac81f4a9b659,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094393363347208,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wmp8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2ee9c4-992a-4251-a717-9843b7b41894,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2,PodSandboxId:10a016c587c22e7d86b0df8374fb970bc55d97e30756c8f1221e21e974ef8796,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724094381626774299,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpwjl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 624275c2-a670-4cc0-a11c-70f3e1b78946,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd,PodSandboxId:5f1f61689816115acade7df9764d44551ec4f4c0166b634e72db9ca6205df7f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724094378012686594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wxrsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d78c5e8-eed2-4da5-9425-76f96e2d8ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca,PodSandboxId:644e4a4ea97f11366a62fe5805da67ba956c5f041ae16a8cd9bbb066ce4e0622,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094367193567094,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0bb2ead65417bad04a5a9744a4d45,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732,PodSandboxId:8f73fd805b78d3b87e7ca93e3d523d088e6dd941a647e215524cab0b016ee42e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094367106237335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-163902,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99cb7ce5acc21c12e8bde640aeed5142,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=721b4a17-25e5-40b6-810e-b6c94d3d3760 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0bca30016c03b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   f175d67a855ad       storage-provisioner
	994f082e7404b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   3                   4636cb498c3d3       kube-controller-manager-ha-163902
	2cd1f97eb38ac       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            3                   f468939e669cd       kube-apiserver-ha-163902
	9cb2b38899974       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   f175d67a855ad       storage-provisioner
	d70c14a96546f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   721b941307200       busybox-7dff88458-vlrsr
	9313e58051c6a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Exited              kube-controller-manager   2                   4636cb498c3d3       kube-controller-manager-ha-163902
	1ca3a34d99a8c       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   a37935e6223fb       kube-vip-ha-163902
	b29ce7ae2e5b5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   68e9795b79bd4       coredns-6f6b679f8f-wmp8k
	554de3a8cfd03       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   c160ca374efd4       coredns-6f6b679f8f-nkths
	6e83d9b3a3807       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                1                   24644424fc1bd       kube-proxy-wxrsv
	ed1062663de60       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   713514c53fd20       kube-scheduler-ha-163902
	7c41e6ee62e55       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   330b4df5bbcc5       kindnet-bpwjl
	2126230b10e8f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   f468939e669cd       kube-apiserver-ha-163902
	a3be8d692f7f2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   0520ed277638a       etcd-ha-163902
	02444059f768b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   14 minutes ago      Exited              busybox                   0                   eb7a960ca621f       busybox-7dff88458-vlrsr
	920809b3fb8b5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   ccb6b229e5b0f       coredns-6f6b679f8f-nkths
	e3292ee2a24df       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   17befe587bdb8       coredns-6f6b679f8f-wmp8k
	2bde6d659e1cd       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    16 minutes ago      Exited              kindnet-cni               0                   10a016c587c22       kindnet-bpwjl
	db4dd64341a0f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      16 minutes ago      Exited              kube-proxy                0                   5f1f616898161       kube-proxy-wxrsv
	4b31ffd467824       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      16 minutes ago      Exited              kube-scheduler            0                   644e4a4ea97f1       kube-scheduler-ha-163902
	d7785bd28970f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   8f73fd805b78d       etcd-ha-163902
	
	
	==> coredns [554de3a8cfd0306b14e037d2df2736c671fafe4053cda737537da92371c6a2d5] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58120->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[17187443]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 19:17:33.074) (total time: 13299ms):
	Trace[17187443]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58120->10.96.0.1:443: read: connection reset by peer 13298ms (19:17:46.373)
	Trace[17187443]: [13.299070301s] [13.299070301s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58120->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [920809b3fb8b51344cf1c21cec1e0db829734c1b415c0eaba0757941d1d4cbb5] <==
	[INFO] 10.244.1.2:51524 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001508794s
	[INFO] 10.244.1.2:44203 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105366s
	[INFO] 10.244.1.2:39145 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000196935s
	[INFO] 10.244.1.2:53804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000174817s
	[INFO] 10.244.0.4:38242 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152582s
	[INFO] 10.244.0.4:50866 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00178155s
	[INFO] 10.244.0.4:41459 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077648s
	[INFO] 10.244.0.4:52991 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001294022s
	[INFO] 10.244.0.4:49760 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077772s
	[INFO] 10.244.2.2:52036 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000184006s
	[INFO] 10.244.2.2:42639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139597s
	[INFO] 10.244.1.2:45707 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157857s
	[INFO] 10.244.1.2:55541 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079589s
	[INFO] 10.244.0.4:39107 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114365s
	[INFO] 10.244.0.4:42814 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075113s
	[INFO] 10.244.1.2:45907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164052s
	[INFO] 10.244.1.2:50977 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168617s
	[INFO] 10.244.1.2:55449 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000213337s
	[INFO] 10.244.1.2:36556 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110937s
	[INFO] 10.244.0.4:58486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000301321s
	[INFO] 10.244.0.4:59114 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075318s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b29ce7ae2e5b57b8b7dd3fb18e375f5fb5fa32dd1c57ae7361d52ac21e4317f0] <==
	Trace[1485927947]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:17:36.296)
	Trace[1485927947]: [10.001013657s] [10.001013657s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:56650->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:56650->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:42040->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:42040->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e3292ee2a24df7bd103e8efbf0f44605709e0c87e301002c2c4e3a79bade0816] <==
	[INFO] 10.244.2.2:54418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140774s
	[INFO] 10.244.2.2:59184 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158893s
	[INFO] 10.244.2.2:53883 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149814s
	[INFO] 10.244.2.2:35674 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136715s
	[INFO] 10.244.0.4:42875 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138512s
	[INFO] 10.244.0.4:58237 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102142s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1864&timeout=8m44s&timeoutSeconds=524&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1868&timeout=9m27s&timeoutSeconds=567&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1868": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1868": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1864": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1864": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1469055367]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 19:15:19.188) (total time: 12251ms):
	Trace[1469055367]: ---"Objects listed" error:Unauthorized 12251ms (19:15:31.439)
	Trace[1469055367]: [12.251295543s] [12.251295543s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[135777212]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 19:15:19.388) (total time: 12054ms):
	Trace[135777212]: ---"Objects listed" error:Unauthorized 12050ms (19:15:31.439)
	Trace[135777212]: [12.054409384s] [12.054409384s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-163902
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_06_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:06:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:22:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:18:03 +0000   Mon, 19 Aug 2024 19:06:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:18:03 +0000   Mon, 19 Aug 2024 19:06:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:18:03 +0000   Mon, 19 Aug 2024 19:06:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:18:03 +0000   Mon, 19 Aug 2024 19:06:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-163902
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d3b52f7c3a144ec8d3a6e98276775f3
	  System UUID:                4d3b52f7-c3a1-44ec-8d3a-6e98276775f3
	  Boot ID:                    26bff1c8-7a07-4ad4-9634-fcbc547b5a26
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vlrsr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-6f6b679f8f-nkths             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-wmp8k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-163902                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-bpwjl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-163902             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-163902    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-wxrsv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-163902             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-163902                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m49s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-163902 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-163902 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-163902 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m                    node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-163902 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	  Normal   NodeNotReady             5m43s (x4 over 6m57s)  kubelet          Node ha-163902 status is now: NodeNotReady
	  Warning  ContainerGCFailed        5m42s (x2 over 6m42s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m54s                  node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	  Normal   RegisteredNode           4m17s                  node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	  Normal   RegisteredNode           3m15s                  node-controller  Node ha-163902 event: Registered Node ha-163902 in Controller
	
	
	Name:               ha-163902-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T19_07_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:07:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:22:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:18:47 +0000   Mon, 19 Aug 2024 19:18:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:18:47 +0000   Mon, 19 Aug 2024 19:18:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:18:47 +0000   Mon, 19 Aug 2024 19:18:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:18:47 +0000   Mon, 19 Aug 2024 19:18:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    ha-163902-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4ebc4d6f40f47d9854129310dcf34d7
	  System UUID:                d4ebc4d6-f40f-47d9-8541-29310dcf34d7
	  Boot ID:                    f7a7580f-2ef2-42cf-8d82-41ab8ac2dfab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9zj57                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-163902-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-97cnn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-163902-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-163902-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-4whvs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-163902-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-163902-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m32s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-163902-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-163902-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-163902-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-163902-m02 status is now: NodeNotReady
	  Normal  Starting                 5m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m17s (x8 over 5m17s)  kubelet          Node ha-163902-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m17s (x8 over 5m17s)  kubelet          Node ha-163902-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m17s (x7 over 5m17s)  kubelet          Node ha-163902-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-163902-m02 event: Registered Node ha-163902-m02 in Controller
	
	
	Name:               ha-163902-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-163902-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=ha-163902
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T19_09_27_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:09:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-163902-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:20:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 19:20:08 +0000   Mon, 19 Aug 2024 19:21:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 19:20:08 +0000   Mon, 19 Aug 2024 19:21:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 19:20:08 +0000   Mon, 19 Aug 2024 19:21:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 19:20:08 +0000   Mon, 19 Aug 2024 19:21:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.130
	  Hostname:    ha-163902-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d771c9152e0748dca0ecbcee5197aaea
	  System UUID:                d771c915-2e07-48dc-a0ec-bcee5197aaea
	  Boot ID:                    705d7db3-a682-4049-90f8-73fb3118ff6b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nlmnn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-plbmk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-9b77p           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-163902-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-163902-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-163902-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-163902-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m54s                  node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal   RegisteredNode           4m17s                  node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal   NodeNotReady             4m14s                  node-controller  Node ha-163902-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m15s                  node-controller  Node ha-163902-m04 event: Registered Node ha-163902-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-163902-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-163902-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-163902-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-163902-m04 has been rebooted, boot id: 705d7db3-a682-4049-90f8-73fb3118ff6b
	  Normal   NodeReady                2m47s                  kubelet          Node ha-163902-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s                   node-controller  Node ha-163902-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.061246] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063525] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.198071] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.117006] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.272735] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Aug19 19:06] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +3.660968] systemd-fstab-generator[894]: Ignoring "noauto" option for root device
	[  +0.062148] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.174652] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.082985] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.372328] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.696903] kauditd_printk_skb: 23 callbacks suppressed
	[Aug19 19:07] kauditd_printk_skb: 26 callbacks suppressed
	[Aug19 19:17] systemd-fstab-generator[3521]: Ignoring "noauto" option for root device
	[  +0.148321] systemd-fstab-generator[3533]: Ignoring "noauto" option for root device
	[  +0.185110] systemd-fstab-generator[3547]: Ignoring "noauto" option for root device
	[  +0.148360] systemd-fstab-generator[3559]: Ignoring "noauto" option for root device
	[  +0.291698] systemd-fstab-generator[3587]: Ignoring "noauto" option for root device
	[  +8.652850] systemd-fstab-generator[3694]: Ignoring "noauto" option for root device
	[  +0.086368] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.548221] kauditd_printk_skb: 12 callbacks suppressed
	[ +12.096677] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.054784] kauditd_printk_skb: 1 callbacks suppressed
	[Aug19 19:18] kauditd_printk_skb: 5 callbacks suppressed
	[ +13.066788] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [a3be8d692f7f209eaa8f4c8738aae27df1a1518dff4906b9e43314dcc8cf9114] <==
	{"level":"info","ts":"2024-08-19T19:19:29.543810Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:19:29.544363Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:19:29.546235Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:19:29.554931Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bcb2eab2b5d0a9fc","to":"8a4d37127f98560a","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T19:19:29.555001Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:19:29.559526Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bcb2eab2b5d0a9fc","to":"8a4d37127f98560a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T19:19:29.559585Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:20:21.588550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bcb2eab2b5d0a9fc switched to configuration voters=(13597188278260378108 14318781806715485285)"}
	{"level":"info","ts":"2024-08-19T19:20:21.593668Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"a9051c714e34311b","local-member-id":"bcb2eab2b5d0a9fc","removed-remote-peer-id":"8a4d37127f98560a","removed-remote-peer-urls":["https://192.168.39.59:2380"]}
	{"level":"info","ts":"2024-08-19T19:20:21.593736Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8a4d37127f98560a"}
	{"level":"warn","ts":"2024-08-19T19:20:21.593885Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"bcb2eab2b5d0a9fc","removed-member-id":"8a4d37127f98560a"}
	{"level":"warn","ts":"2024-08-19T19:20:21.594025Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:20:21.594060Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a4d37127f98560a"}
	{"level":"warn","ts":"2024-08-19T19:20:21.594038Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-08-19T19:20:21.594933Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:20:21.594993Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:20:21.595210Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"warn","ts":"2024-08-19T19:20:21.595543Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a","error":"context canceled"}
	{"level":"warn","ts":"2024-08-19T19:20:21.595585Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"8a4d37127f98560a","error":"failed to read 8a4d37127f98560a on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-19T19:20:21.595619Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"warn","ts":"2024-08-19T19:20:21.595821Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a","error":"context canceled"}
	{"level":"info","ts":"2024-08-19T19:20:21.595858Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:20:21.595879Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:20:21.595892Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"bcb2eab2b5d0a9fc","removed-remote-peer-id":"8a4d37127f98560a"}
	{"level":"warn","ts":"2024-08-19T19:20:21.612632Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id-stream-handler":"bcb2eab2b5d0a9fc","remote-peer-id-from":"8a4d37127f98560a"}
	
	
	==> etcd [d7785bd28970f73a71fca1e56f35899193b47ff2a9fac56041c837e64ab2d732] <==
	{"level":"info","ts":"2024-08-19T19:15:33.182011Z","caller":"traceutil/trace.go:171","msg":"trace[791016477] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"744.789077ms","start":"2024-08-19T19:15:32.437218Z","end":"2024-08-19T19:15:33.182007Z","steps":["trace[791016477] 'agreement among raft nodes before linearized reading'  (duration: 744.768877ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:15:33.182055Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T19:15:32.437208Z","time spent":"744.84079ms","remote":"127.0.0.1:51878","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:10000 "}
	2024/08/19 19:15:33 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-08-19T19:15:33.251934Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"bcb2eab2b5d0a9fc","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-19T19:15:33.252124Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c6b68830659a3c65"}
	{"level":"info","ts":"2024-08-19T19:15:33.252188Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c6b68830659a3c65"}
	{"level":"info","ts":"2024-08-19T19:15:33.252226Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c6b68830659a3c65"}
	{"level":"info","ts":"2024-08-19T19:15:33.252324Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65"}
	{"level":"info","ts":"2024-08-19T19:15:33.252399Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65"}
	{"level":"info","ts":"2024-08-19T19:15:33.252493Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"c6b68830659a3c65"}
	{"level":"info","ts":"2024-08-19T19:15:33.252558Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c6b68830659a3c65"}
	{"level":"info","ts":"2024-08-19T19:15:33.252566Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:15:33.252576Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:15:33.252592Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:15:33.252712Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:15:33.252800Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:15:33.252906Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:15:33.252962Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8a4d37127f98560a"}
	{"level":"info","ts":"2024-08-19T19:15:33.255864Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.227:2380"}
	{"level":"warn","ts":"2024-08-19T19:15:33.255959Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.816197001s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-19T19:15:33.255998Z","caller":"traceutil/trace.go:171","msg":"trace[419324862] range","detail":"{range_begin:; range_end:; }","duration":"1.816252837s","start":"2024-08-19T19:15:31.439735Z","end":"2024-08-19T19:15:33.255988Z","steps":["trace[419324862] 'agreement among raft nodes before linearized reading'  (duration: 1.816196874s)"],"step_count":1}
	{"level":"error","ts":"2024-08-19T19:15:33.256054Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-19T19:15:33.256168Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.227:2380"}
	{"level":"info","ts":"2024-08-19T19:15:33.256237Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-163902","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.227:2380"],"advertise-client-urls":["https://192.168.39.227:2379"]}
	
	
	==> kernel <==
	 19:22:55 up 17 min,  0 users,  load average: 0.09, 0.13, 0.14
	Linux ha-163902 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2bde6d659e1cd398f96c27c4dffbf4c23ca865f307af9a9cb254bcd9e460c1e2] <==
	I0819 19:15:12.625961       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:15:12.626003       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:15:12.626204       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:15:12.626225       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:15:12.626289       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:15:12.626310       1 main.go:299] handling current node
	I0819 19:15:12.626321       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:15:12.626326       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	E0819 19:15:13.349631       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1838&timeout=7m34s&timeoutSeconds=454&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0819 19:15:22.625467       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:15:22.625570       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:15:22.625719       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:15:22.625753       1 main.go:299] handling current node
	I0819 19:15:22.625776       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:15:22.625792       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:15:22.625876       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:15:22.625896       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:15:32.629574       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:15:32.629620       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:15:32.629718       1 main.go:295] Handling node with IPs: map[192.168.39.59:{}]
	I0819 19:15:32.629724       1 main.go:322] Node ha-163902-m03 has CIDR [10.244.2.0/24] 
	I0819 19:15:32.629766       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:15:32.629784       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:15:32.629860       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:15:32.629891       1 main.go:299] handling current node
	
	
	==> kindnet [7c41e6ee62e55f65d9881f0a06abf0a05e7082c63e2db15f2d042c6b777eab00] <==
	I0819 19:22:12.292432       1 main.go:299] handling current node
	I0819 19:22:22.283443       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:22:22.283513       1 main.go:299] handling current node
	I0819 19:22:22.283528       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:22:22.283533       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:22:22.283670       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:22:22.283692       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:22:32.286407       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:22:32.286522       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:22:32.286676       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:22:32.286710       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:22:32.286784       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:22:32.286804       1 main.go:299] handling current node
	I0819 19:22:42.285660       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:22:42.285703       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:22:42.285850       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:22:42.285870       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	I0819 19:22:42.285923       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:22:42.285941       1 main.go:299] handling current node
	I0819 19:22:52.287710       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0819 19:22:52.287801       1 main.go:299] handling current node
	I0819 19:22:52.287818       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0819 19:22:52.287824       1 main.go:322] Node ha-163902-m02 has CIDR [10.244.1.0/24] 
	I0819 19:22:52.287954       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0819 19:22:52.287975       1 main.go:322] Node ha-163902-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2126230b10e8f0116efe5b0c00eb69b8334aced694d463f9bd0dfac35e3fb55a] <==
	I0819 19:17:21.898184       1 options.go:228] external host was not specified, using 192.168.39.227
	I0819 19:17:21.900480       1 server.go:142] Version: v1.31.0
	I0819 19:17:21.900520       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:17:22.458693       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 19:17:22.485210       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 19:17:22.497636       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 19:17:22.499346       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 19:17:22.499617       1 instance.go:232] Using reconciler: lease
	W0819 19:17:42.454095       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0819 19:17:42.454219       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0819 19:17:42.501322       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0819 19:17:42.501368       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [2cd1f97eb38ac02d43f6ffcc43488e9af5b265e9f756b1d495ed3349c40726e7] <==
	I0819 19:18:07.239217       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0819 19:18:07.239231       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0819 19:18:07.335848       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 19:18:07.336422       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 19:18:07.336524       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 19:18:07.336588       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 19:18:07.337193       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 19:18:07.342561       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 19:18:07.342685       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 19:18:07.346936       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 19:18:07.347055       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 19:18:07.347114       1 aggregator.go:171] initial CRD sync complete...
	I0819 19:18:07.347139       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 19:18:07.347223       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 19:18:07.347231       1 cache.go:39] Caches are synced for autoregister controller
	I0819 19:18:07.351496       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 19:18:07.358263       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 19:18:07.358296       1 policy_source.go:224] refreshing policies
	I0819 19:18:07.411572       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0819 19:18:07.450008       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.162 192.168.39.59]
	I0819 19:18:07.452275       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 19:18:07.466883       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0819 19:18:07.476660       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0819 19:18:08.246241       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 19:18:08.900973       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.162 192.168.39.227 192.168.39.59]
	
	
	==> kube-controller-manager [9313e58051c6a0409400c754736819f17e02fce650219c44b2b173742cea39f1] <==
	I0819 19:17:54.230726       1 serving.go:386] Generated self-signed cert in-memory
	I0819 19:17:54.727986       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 19:17:54.728080       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:17:54.729524       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 19:17:54.729679       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 19:17:54.729829       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 19:17:54.729905       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 19:18:04.732213       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.227:8443/healthz\": dial tcp 192.168.39.227:8443: connect: connection refused"
	
	
	==> kube-controller-manager [994f082e7404b797402b183967d58a813905219b02a9f4a28db44e4196488b94] <==
	I0819 19:21:08.681384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:21:08.707221       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:21:08.759758       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.886049ms"
	I0819 19:21:08.760614       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="226.189µs"
	I0819 19:21:11.798665       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	I0819 19:21:13.856622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-163902-m04"
	E0819 19:21:18.618365       1 gc_controller.go:151] "Failed to get node" err="node \"ha-163902-m03\" not found" logger="pod-garbage-collector-controller" node="ha-163902-m03"
	E0819 19:21:18.618465       1 gc_controller.go:151] "Failed to get node" err="node \"ha-163902-m03\" not found" logger="pod-garbage-collector-controller" node="ha-163902-m03"
	E0819 19:21:18.618492       1 gc_controller.go:151] "Failed to get node" err="node \"ha-163902-m03\" not found" logger="pod-garbage-collector-controller" node="ha-163902-m03"
	E0819 19:21:18.618518       1 gc_controller.go:151] "Failed to get node" err="node \"ha-163902-m03\" not found" logger="pod-garbage-collector-controller" node="ha-163902-m03"
	E0819 19:21:18.618544       1 gc_controller.go:151] "Failed to get node" err="node \"ha-163902-m03\" not found" logger="pod-garbage-collector-controller" node="ha-163902-m03"
	I0819 19:21:18.630309       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-163902-m03"
	I0819 19:21:18.703516       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-163902-m03"
	I0819 19:21:18.703638       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-xq852"
	I0819 19:21:18.741493       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-xq852"
	I0819 19:21:18.741534       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-72q7r"
	I0819 19:21:18.771999       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-72q7r"
	I0819 19:21:18.772259       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-163902-m03"
	I0819 19:21:18.807538       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-163902-m03"
	I0819 19:21:18.807561       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-163902-m03"
	I0819 19:21:18.844738       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-163902-m03"
	I0819 19:21:18.844822       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-163902-m03"
	I0819 19:21:18.878648       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-163902-m03"
	I0819 19:21:18.878675       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-163902-m03"
	I0819 19:21:18.915799       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-163902-m03"
	
	
	==> kube-proxy [6e83d9b3a38071761ae1db6a7e164216729a64bc610a8238c5e743cc55f334dd] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:17:23.526566       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-163902\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 19:17:26.598736       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-163902\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 19:17:29.669970       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-163902\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 19:17:35.816367       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-163902\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 19:17:48.102247       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-163902\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0819 19:18:05.593400       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0819 19:18:05.593667       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:18:05.650666       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:18:05.650721       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:18:05.650746       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:18:05.653133       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:18:05.653373       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:18:05.653397       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:18:05.654707       1 config.go:197] "Starting service config controller"
	I0819 19:18:05.654748       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:18:05.654770       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:18:05.654773       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:18:05.655382       1 config.go:326] "Starting node config controller"
	I0819 19:18:05.655410       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:18:05.755465       1 shared_informer.go:320] Caches are synced for node config
	I0819 19:18:05.755568       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:18:05.755604       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [db4dd64341a0fca706e949967c25987318f06f1963f0ca81c0088ea296d307fd] <==
	E0819 19:14:14.725689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:14.725732       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:14.725787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:14.725862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:14.725902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:22.277614       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:22.277687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:22.277614       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:22.277919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:22.277772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:22.278031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:31.494696       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:31.495305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:34.566414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:34.566548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:37.639450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:37.639625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:52.998672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:52.998834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:52.999004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:52.999070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-163902&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:14:59.141687       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:14:59.142383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1818\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 19:15:26.790095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 19:15:26.790265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [4b31ffd4678249f84d4b3a97520776f8e763f727d011d49f88f47077b2b7f0ca] <==
	W0819 19:06:11.189865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 19:06:11.189919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:06:11.215289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 19:06:11.215345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0819 19:06:12.630110       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 19:08:50.829572       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9zj57\": pod busybox-7dff88458-9zj57 is already assigned to node \"ha-163902-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-9zj57" node="ha-163902-m03"
	E0819 19:08:50.829705       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9zj57\": pod busybox-7dff88458-9zj57 is already assigned to node \"ha-163902-m02\"" pod="default/busybox-7dff88458-9zj57"
	E0819 19:15:18.298163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0819 19:15:19.648116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0819 19:15:20.875519       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0819 19:15:21.591797       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0819 19:15:22.224948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0819 19:15:23.768329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0819 19:15:25.832664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0819 19:15:26.080814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0819 19:15:26.347850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0819 19:15:27.257746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0819 19:15:27.935426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0819 19:15:28.886248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0819 19:15:29.954868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0819 19:15:31.448251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	I0819 19:15:33.153089       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0819 19:15:33.153214       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0819 19:15:33.153350       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0819 19:15:33.154714       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ed1062663de6014874208d7e918cf4e228d79e7643a88797435170333e7c0a68] <==
	W0819 19:17:59.007830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.227:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:17:59.007907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.227:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:17:59.359798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.227:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:17:59.359847       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.227:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:17:59.421944       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.227:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:17:59.422016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.227:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:17:59.996299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.227:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:17:59.996409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.227:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:18:00.434938       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.227:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:18:00.435001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.227:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:18:01.318507       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.227:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:18:01.318579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.227:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:18:01.933214       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.227:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:18:01.933290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.227:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:18:03.400243       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.227:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:18:03.400299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.227:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:18:03.864754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.227:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:18:03.864875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.227:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0819 19:18:04.540761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.227:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0819 19:18:04.540933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.227:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	I0819 19:18:15.813955       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 19:20:18.303077       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nlmnn\": pod busybox-7dff88458-nlmnn is already assigned to node \"ha-163902-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-nlmnn" node="ha-163902-m04"
	E0819 19:20:18.303365       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6ac12160-2d62-4ea0-a4ac-3357a7ed0f6d(default/busybox-7dff88458-nlmnn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-nlmnn"
	E0819 19:20:18.303469       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nlmnn\": pod busybox-7dff88458-nlmnn is already assigned to node \"ha-163902-m04\"" pod="default/busybox-7dff88458-nlmnn"
	I0819 19:20:18.303561       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-nlmnn" node="ha-163902-m04"
	
	
	==> kubelet <==
	Aug 19 19:21:23 ha-163902 kubelet[1316]: E0819 19:21:23.778055    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095283777715517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:21:23 ha-163902 kubelet[1316]: E0819 19:21:23.778098    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095283777715517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:21:33 ha-163902 kubelet[1316]: E0819 19:21:33.780919    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095293780327985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:21:33 ha-163902 kubelet[1316]: E0819 19:21:33.780985    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095293780327985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:21:43 ha-163902 kubelet[1316]: E0819 19:21:43.783047    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095303782688411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:21:43 ha-163902 kubelet[1316]: E0819 19:21:43.783102    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095303782688411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:21:53 ha-163902 kubelet[1316]: E0819 19:21:53.784711    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095313784414235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:21:53 ha-163902 kubelet[1316]: E0819 19:21:53.785046    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095313784414235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:22:03 ha-163902 kubelet[1316]: E0819 19:22:03.786488    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095323786064552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:22:03 ha-163902 kubelet[1316]: E0819 19:22:03.786850    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095323786064552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:22:13 ha-163902 kubelet[1316]: E0819 19:22:13.380853    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:22:13 ha-163902 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:22:13 ha-163902 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:22:13 ha-163902 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:22:13 ha-163902 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:22:13 ha-163902 kubelet[1316]: E0819 19:22:13.788905    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095333788577103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:22:13 ha-163902 kubelet[1316]: E0819 19:22:13.788952    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095333788577103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:22:23 ha-163902 kubelet[1316]: E0819 19:22:23.791539    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095343791129982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:22:23 ha-163902 kubelet[1316]: E0819 19:22:23.791845    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095343791129982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:22:33 ha-163902 kubelet[1316]: E0819 19:22:33.793916    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095353793408531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:22:33 ha-163902 kubelet[1316]: E0819 19:22:33.794240    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095353793408531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:22:43 ha-163902 kubelet[1316]: E0819 19:22:43.796848    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095363796306132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:22:43 ha-163902 kubelet[1316]: E0819 19:22:43.797202    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095363796306132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:22:53 ha-163902 kubelet[1316]: E0819 19:22:53.798790    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095373798445236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:22:53 ha-163902 kubelet[1316]: E0819 19:22:53.799074    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095373798445236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:22:54.608136  461256 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-430949/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
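The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.Scanner hitting its default 64 KiB token limit while reading lastStart.txt line by line. A minimal sketch of the standard workaround, raising the scanner's buffer before scanning; the file name and the 10 MiB cap here are illustrative assumptions, not values taken from minikube's logs.go:

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Example path only; the report reads .minikube/logs/lastStart.txt under the Jenkins workspace.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); allow lines up to an
		// assumed 10 MiB so very long log lines no longer trigger "token too long".
		scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			log.Fatal(err)
		}
	}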
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-163902 -n ha-163902
helpers_test.go:261: (dbg) Run:  kubectl --context ha-163902 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.80s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (322.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-548379
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-548379
E0819 19:39:38.962260  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:39:39.472418  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:39:56.397569  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-548379: exit status 82 (2m1.791909286s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-548379-m03"  ...
	* Stopping node "multinode-548379-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
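The GUEST_STOP_TIMEOUT above means `minikube stop` gave up while the VM was still "Running", and the advice box asks for logs collected via `minikube logs --file=logs.txt`. A minimal sketch, assuming only the Go standard library and the binary path already shown in this report, of bounding the stop attempt and falling back to log collection; the 3-minute budget and the logs.txt file name follow the advice box but are otherwise assumptions, not part of the test suite:

	package main

	import (
		"context"
		"log"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		// Bound the stop attempt; a 3-minute budget is an assumed value.
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()

		stop := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "multinode-548379")
		stop.Stdout, stop.Stderr = os.Stdout, os.Stderr

		if err := stop.Run(); err != nil {
			log.Printf("stop failed or timed out: %v; collecting logs", err)
			// Mirrors the advice box: minikube logs --file=logs.txt
			logs := exec.Command("out/minikube-linux-amd64", "-p", "multinode-548379", "logs", "--file=logs.txt")
			logs.Stdout, logs.Stderr = os.Stdout, os.Stderr
			if err := logs.Run(); err != nil {
				log.Printf("log collection failed: %v", err)
			}
		}
	}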
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-548379" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-548379 --wait=true -v=8 --alsologtostderr
E0819 19:42:42.027321  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:44:38.961228  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-548379 --wait=true -v=8 --alsologtostderr: (3m18.721909982s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-548379
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-548379 -n multinode-548379
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-548379 logs -n 25: (1.461353274s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-548379 cp multinode-548379-m02:/home/docker/cp-test.txt                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3755015912/001/cp-test_multinode-548379-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-548379 cp multinode-548379-m02:/home/docker/cp-test.txt                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379:/home/docker/cp-test_multinode-548379-m02_multinode-548379.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n multinode-548379 sudo cat                                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | /home/docker/cp-test_multinode-548379-m02_multinode-548379.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-548379 cp multinode-548379-m02:/home/docker/cp-test.txt                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m03:/home/docker/cp-test_multinode-548379-m02_multinode-548379-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n multinode-548379-m03 sudo cat                                   | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | /home/docker/cp-test_multinode-548379-m02_multinode-548379-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-548379 cp testdata/cp-test.txt                                                | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-548379 cp multinode-548379-m03:/home/docker/cp-test.txt                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3755015912/001/cp-test_multinode-548379-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-548379 cp multinode-548379-m03:/home/docker/cp-test.txt                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379:/home/docker/cp-test_multinode-548379-m03_multinode-548379.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n multinode-548379 sudo cat                                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | /home/docker/cp-test_multinode-548379-m03_multinode-548379.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-548379 cp multinode-548379-m03:/home/docker/cp-test.txt                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m02:/home/docker/cp-test_multinode-548379-m03_multinode-548379-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n multinode-548379-m02 sudo cat                                   | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | /home/docker/cp-test_multinode-548379-m03_multinode-548379-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-548379 node stop m03                                                          | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	| node    | multinode-548379 node start                                                             | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:39 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-548379                                                                | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:39 UTC |                     |
	| stop    | -p multinode-548379                                                                     | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:39 UTC |                     |
	| start   | -p multinode-548379                                                                     | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:41 UTC | 19 Aug 24 19:44 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-548379                                                                | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:44 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:41:22
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:41:22.590573  470984 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:41:22.590838  470984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:41:22.590848  470984 out.go:358] Setting ErrFile to fd 2...
	I0819 19:41:22.590853  470984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:41:22.591067  470984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:41:22.591638  470984 out.go:352] Setting JSON to false
	I0819 19:41:22.592662  470984 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12234,"bootTime":1724084249,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:41:22.592732  470984 start.go:139] virtualization: kvm guest
	I0819 19:41:22.595122  470984 out.go:177] * [multinode-548379] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:41:22.596694  470984 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 19:41:22.596753  470984 notify.go:220] Checking for updates...
	I0819 19:41:22.599198  470984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:41:22.600569  470984 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:41:22.601879  470984 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:41:22.603198  470984 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:41:22.604437  470984 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:41:22.606194  470984 config.go:182] Loaded profile config "multinode-548379": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:41:22.606306  470984 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 19:41:22.606830  470984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:41:22.606921  470984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:41:22.623376  470984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34597
	I0819 19:41:22.623974  470984 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:41:22.624656  470984 main.go:141] libmachine: Using API Version  1
	I0819 19:41:22.624678  470984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:41:22.625048  470984 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:41:22.625339  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:41:22.668345  470984 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:41:22.669596  470984 start.go:297] selected driver: kvm2
	I0819 19:41:22.669626  470984 start.go:901] validating driver "kvm2" against &{Name:multinode-548379 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-548379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:41:22.669813  470984 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:41:22.670154  470984 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:41:22.670236  470984 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:41:22.686549  470984 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:41:22.687415  470984 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:41:22.687474  470984 cni.go:84] Creating CNI manager for ""
	I0819 19:41:22.687480  470984 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 19:41:22.687531  470984 start.go:340] cluster config:
	{Name:multinode-548379 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-548379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:41:22.687662  470984 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:41:22.689677  470984 out.go:177] * Starting "multinode-548379" primary control-plane node in "multinode-548379" cluster
	I0819 19:41:22.690905  470984 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:41:22.690953  470984 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:41:22.690962  470984 cache.go:56] Caching tarball of preloaded images
	I0819 19:41:22.691062  470984 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:41:22.691072  470984 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:41:22.691184  470984 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/config.json ...
	I0819 19:41:22.691412  470984 start.go:360] acquireMachinesLock for multinode-548379: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:41:22.691460  470984 start.go:364] duration metric: took 26.32µs to acquireMachinesLock for "multinode-548379"
	I0819 19:41:22.691477  470984 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:41:22.691483  470984 fix.go:54] fixHost starting: 
	I0819 19:41:22.691797  470984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:41:22.691835  470984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:41:22.708034  470984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
	I0819 19:41:22.708545  470984 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:41:22.709095  470984 main.go:141] libmachine: Using API Version  1
	I0819 19:41:22.709119  470984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:41:22.709483  470984 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:41:22.709665  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:41:22.709840  470984 main.go:141] libmachine: (multinode-548379) Calling .GetState
	I0819 19:41:22.711562  470984 fix.go:112] recreateIfNeeded on multinode-548379: state=Running err=<nil>
	W0819 19:41:22.711591  470984 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:41:22.713737  470984 out.go:177] * Updating the running kvm2 "multinode-548379" VM ...
	I0819 19:41:22.715026  470984 machine.go:93] provisionDockerMachine start ...
	I0819 19:41:22.715049  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:41:22.715343  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:41:22.718200  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:22.718663  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:41:22.718686  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:22.718860  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:41:22.719105  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:22.719266  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:22.719411  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:41:22.719568  470984 main.go:141] libmachine: Using SSH client type: native
	I0819 19:41:22.719772  470984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0819 19:41:22.719784  470984 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:41:22.833515  470984 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-548379
	
	I0819 19:41:22.833543  470984 main.go:141] libmachine: (multinode-548379) Calling .GetMachineName
	I0819 19:41:22.833831  470984 buildroot.go:166] provisioning hostname "multinode-548379"
	I0819 19:41:22.833865  470984 main.go:141] libmachine: (multinode-548379) Calling .GetMachineName
	I0819 19:41:22.834120  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:41:22.836839  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:22.837231  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:41:22.837254  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:22.837406  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:41:22.837607  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:22.837811  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:22.838026  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:41:22.838296  470984 main.go:141] libmachine: Using SSH client type: native
	I0819 19:41:22.838510  470984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0819 19:41:22.838527  470984 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-548379 && echo "multinode-548379" | sudo tee /etc/hostname
	I0819 19:41:22.966486  470984 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-548379
	
	I0819 19:41:22.966563  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:41:22.969563  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:22.970008  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:41:22.970041  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:22.970312  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:41:22.970550  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:22.970730  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:22.970906  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:41:22.971077  470984 main.go:141] libmachine: Using SSH client type: native
	I0819 19:41:22.971297  470984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0819 19:41:22.971315  470984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-548379' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-548379/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-548379' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:41:23.090060  470984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:41:23.090089  470984 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:41:23.090127  470984 buildroot.go:174] setting up certificates
	I0819 19:41:23.090136  470984 provision.go:84] configureAuth start
	I0819 19:41:23.090146  470984 main.go:141] libmachine: (multinode-548379) Calling .GetMachineName
	I0819 19:41:23.090463  470984 main.go:141] libmachine: (multinode-548379) Calling .GetIP
	I0819 19:41:23.093075  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.093503  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:41:23.093537  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.093753  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:41:23.096249  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.096593  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:41:23.096619  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.096785  470984 provision.go:143] copyHostCerts
	I0819 19:41:23.096819  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:41:23.096852  470984 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:41:23.096872  470984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:41:23.096939  470984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:41:23.097017  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:41:23.097036  470984 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:41:23.097046  470984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:41:23.097082  470984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:41:23.097175  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:41:23.097201  470984 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:41:23.097210  470984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:41:23.097237  470984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:41:23.097293  470984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.multinode-548379 san=[127.0.0.1 192.168.39.35 localhost minikube multinode-548379]
	I0819 19:41:23.310230  470984 provision.go:177] copyRemoteCerts
	I0819 19:41:23.310295  470984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:41:23.310322  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:41:23.313449  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.313855  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:41:23.313888  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.314090  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:41:23.314278  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:23.314433  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:41:23.314617  470984 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/multinode-548379/id_rsa Username:docker}
	I0819 19:41:23.404288  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 19:41:23.404374  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:41:23.429517  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 19:41:23.429621  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0819 19:41:23.453725  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 19:41:23.453812  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 19:41:23.478690  470984 provision.go:87] duration metric: took 388.538189ms to configureAuth
	I0819 19:41:23.478727  470984 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:41:23.478956  470984 config.go:182] Loaded profile config "multinode-548379": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:41:23.479037  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:41:23.481940  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.482282  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:41:23.482315  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.482488  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:41:23.482699  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:23.482907  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:23.483029  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:41:23.483207  470984 main.go:141] libmachine: Using SSH client type: native
	I0819 19:41:23.483374  470984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0819 19:41:23.483391  470984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:42:54.405518  470984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:42:54.405611  470984 machine.go:96] duration metric: took 1m31.690557871s to provisionDockerMachine
	I0819 19:42:54.405630  470984 start.go:293] postStartSetup for "multinode-548379" (driver="kvm2")
	I0819 19:42:54.405664  470984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:42:54.405711  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:42:54.406100  470984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:42:54.406135  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:42:54.409906  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.410334  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:42:54.410368  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.410559  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:42:54.410808  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:42:54.410995  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:42:54.411194  470984 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/multinode-548379/id_rsa Username:docker}
	I0819 19:42:54.500522  470984 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:42:54.505031  470984 command_runner.go:130] > NAME=Buildroot
	I0819 19:42:54.505061  470984 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 19:42:54.505066  470984 command_runner.go:130] > ID=buildroot
	I0819 19:42:54.505071  470984 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 19:42:54.505077  470984 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 19:42:54.505119  470984 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:42:54.505154  470984 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:42:54.505248  470984 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:42:54.505348  470984 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:42:54.505361  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 19:42:54.505451  470984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:42:54.517458  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:42:54.542052  470984 start.go:296] duration metric: took 136.406058ms for postStartSetup
	I0819 19:42:54.542105  470984 fix.go:56] duration metric: took 1m31.850621637s for fixHost
	I0819 19:42:54.542133  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:42:54.544873  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.545246  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:42:54.545275  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.545534  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:42:54.545781  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:42:54.545964  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:42:54.546102  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:42:54.546284  470984 main.go:141] libmachine: Using SSH client type: native
	I0819 19:42:54.546449  470984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0819 19:42:54.546464  470984 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:42:54.662147  470984 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724096574.625059009
	
	I0819 19:42:54.662180  470984 fix.go:216] guest clock: 1724096574.625059009
	I0819 19:42:54.662188  470984 fix.go:229] Guest: 2024-08-19 19:42:54.625059009 +0000 UTC Remote: 2024-08-19 19:42:54.542111305 +0000 UTC m=+91.990512322 (delta=82.947704ms)
	I0819 19:42:54.662211  470984 fix.go:200] guest clock delta is within tolerance: 82.947704ms
	I0819 19:42:54.662218  470984 start.go:83] releasing machines lock for "multinode-548379", held for 1m31.970746689s
	I0819 19:42:54.662242  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:42:54.662518  470984 main.go:141] libmachine: (multinode-548379) Calling .GetIP
	I0819 19:42:54.665051  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.665508  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:42:54.665533  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.665712  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:42:54.666224  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:42:54.666412  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:42:54.666511  470984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:42:54.666568  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:42:54.666634  470984 ssh_runner.go:195] Run: cat /version.json
	I0819 19:42:54.666660  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:42:54.669268  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.669299  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.669838  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:42:54.669871  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.669896  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:42:54.669916  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.670081  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:42:54.670154  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:42:54.670299  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:42:54.670302  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:42:54.670480  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:42:54.670488  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:42:54.670664  470984 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/multinode-548379/id_rsa Username:docker}
	I0819 19:42:54.670670  470984 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/multinode-548379/id_rsa Username:docker}
	I0819 19:42:54.772527  470984 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 19:42:54.772595  470984 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 19:42:54.772792  470984 ssh_runner.go:195] Run: systemctl --version
	I0819 19:42:54.778783  470984 command_runner.go:130] > systemd 252 (252)
	I0819 19:42:54.778847  470984 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 19:42:54.778980  470984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:42:54.937197  470984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 19:42:54.942902  470984 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 19:42:54.943015  470984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:42:54.943081  470984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:42:54.953684  470984 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 19:42:54.953721  470984 start.go:495] detecting cgroup driver to use...
	I0819 19:42:54.953803  470984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:42:54.973245  470984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:42:54.988294  470984 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:42:54.988366  470984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:42:55.002771  470984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:42:55.017059  470984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:42:55.182200  470984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:42:55.340634  470984 docker.go:233] disabling docker service ...
	I0819 19:42:55.340724  470984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:42:55.361379  470984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:42:55.376425  470984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:42:55.530685  470984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:42:55.670682  470984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:42:55.684640  470984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:42:55.703722  470984 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0819 19:42:55.703780  470984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:42:55.703847  470984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:42:55.714592  470984 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:42:55.714681  470984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:42:55.725608  470984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:42:55.736674  470984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:42:55.747419  470984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:42:55.758557  470984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:42:55.769367  470984 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:42:55.781024  470984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:42:55.791844  470984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:42:55.801465  470984 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 19:42:55.801559  470984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:42:55.811382  470984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:42:55.952799  470984 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:42:56.202800  470984 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:42:56.202881  470984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:42:56.213515  470984 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 19:42:56.213546  470984 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 19:42:56.213553  470984 command_runner.go:130] > Device: 0,22	Inode: 1341        Links: 1
	I0819 19:42:56.213560  470984 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 19:42:56.213565  470984 command_runner.go:130] > Access: 2024-08-19 19:42:56.072560021 +0000
	I0819 19:42:56.213572  470984 command_runner.go:130] > Modify: 2024-08-19 19:42:56.055559563 +0000
	I0819 19:42:56.213578  470984 command_runner.go:130] > Change: 2024-08-19 19:42:56.055559563 +0000
	I0819 19:42:56.213584  470984 command_runner.go:130] >  Birth: -
	I0819 19:42:56.213758  470984 start.go:563] Will wait 60s for crictl version
	I0819 19:42:56.213833  470984 ssh_runner.go:195] Run: which crictl
	I0819 19:42:56.217708  470984 command_runner.go:130] > /usr/bin/crictl
	I0819 19:42:56.217868  470984 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:42:56.250994  470984 command_runner.go:130] > Version:  0.1.0
	I0819 19:42:56.251021  470984 command_runner.go:130] > RuntimeName:  cri-o
	I0819 19:42:56.251027  470984 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 19:42:56.251035  470984 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 19:42:56.251058  470984 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:42:56.251125  470984 ssh_runner.go:195] Run: crio --version
	I0819 19:42:56.281149  470984 command_runner.go:130] > crio version 1.29.1
	I0819 19:42:56.281181  470984 command_runner.go:130] > Version:        1.29.1
	I0819 19:42:56.281189  470984 command_runner.go:130] > GitCommit:      unknown
	I0819 19:42:56.281195  470984 command_runner.go:130] > GitCommitDate:  unknown
	I0819 19:42:56.281199  470984 command_runner.go:130] > GitTreeState:   clean
	I0819 19:42:56.281207  470984 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 19:42:56.281212  470984 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 19:42:56.281216  470984 command_runner.go:130] > Compiler:       gc
	I0819 19:42:56.281221  470984 command_runner.go:130] > Platform:       linux/amd64
	I0819 19:42:56.281225  470984 command_runner.go:130] > Linkmode:       dynamic
	I0819 19:42:56.281229  470984 command_runner.go:130] > BuildTags:      
	I0819 19:42:56.281235  470984 command_runner.go:130] >   containers_image_ostree_stub
	I0819 19:42:56.281242  470984 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 19:42:56.281248  470984 command_runner.go:130] >   btrfs_noversion
	I0819 19:42:56.281257  470984 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 19:42:56.281264  470984 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 19:42:56.281273  470984 command_runner.go:130] >   seccomp
	I0819 19:42:56.281279  470984 command_runner.go:130] > LDFlags:          unknown
	I0819 19:42:56.281284  470984 command_runner.go:130] > SeccompEnabled:   true
	I0819 19:42:56.281289  470984 command_runner.go:130] > AppArmorEnabled:  false
	I0819 19:42:56.281379  470984 ssh_runner.go:195] Run: crio --version
	I0819 19:42:56.311464  470984 command_runner.go:130] > crio version 1.29.1
	I0819 19:42:56.311490  470984 command_runner.go:130] > Version:        1.29.1
	I0819 19:42:56.311498  470984 command_runner.go:130] > GitCommit:      unknown
	I0819 19:42:56.311505  470984 command_runner.go:130] > GitCommitDate:  unknown
	I0819 19:42:56.311511  470984 command_runner.go:130] > GitTreeState:   clean
	I0819 19:42:56.311518  470984 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 19:42:56.311524  470984 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 19:42:56.311529  470984 command_runner.go:130] > Compiler:       gc
	I0819 19:42:56.311535  470984 command_runner.go:130] > Platform:       linux/amd64
	I0819 19:42:56.311541  470984 command_runner.go:130] > Linkmode:       dynamic
	I0819 19:42:56.311547  470984 command_runner.go:130] > BuildTags:      
	I0819 19:42:56.311560  470984 command_runner.go:130] >   containers_image_ostree_stub
	I0819 19:42:56.311567  470984 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 19:42:56.311572  470984 command_runner.go:130] >   btrfs_noversion
	I0819 19:42:56.311580  470984 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 19:42:56.311627  470984 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 19:42:56.311659  470984 command_runner.go:130] >   seccomp
	I0819 19:42:56.311666  470984 command_runner.go:130] > LDFlags:          unknown
	I0819 19:42:56.311672  470984 command_runner.go:130] > SeccompEnabled:   true
	I0819 19:42:56.311678  470984 command_runner.go:130] > AppArmorEnabled:  false
	I0819 19:42:56.314926  470984 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:42:56.316383  470984 main.go:141] libmachine: (multinode-548379) Calling .GetIP
	I0819 19:42:56.319200  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:56.319602  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:42:56.319635  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:56.319795  470984 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:42:56.324459  470984 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0819 19:42:56.324646  470984 kubeadm.go:883] updating cluster {Name:multinode-548379 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-548379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:42:56.324805  470984 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:42:56.324865  470984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:42:56.367605  470984 command_runner.go:130] > {
	I0819 19:42:56.367632  470984 command_runner.go:130] >   "images": [
	I0819 19:42:56.367636  470984 command_runner.go:130] >     {
	I0819 19:42:56.367645  470984 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 19:42:56.367649  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.367655  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 19:42:56.367659  470984 command_runner.go:130] >       ],
	I0819 19:42:56.367663  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.367678  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 19:42:56.367686  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 19:42:56.367689  470984 command_runner.go:130] >       ],
	I0819 19:42:56.367693  470984 command_runner.go:130] >       "size": "87165492",
	I0819 19:42:56.367697  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.367701  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.367707  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.367713  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.367717  470984 command_runner.go:130] >     },
	I0819 19:42:56.367720  470984 command_runner.go:130] >     {
	I0819 19:42:56.367726  470984 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 19:42:56.367730  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.367735  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 19:42:56.367743  470984 command_runner.go:130] >       ],
	I0819 19:42:56.367748  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.367757  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 19:42:56.367768  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 19:42:56.367773  470984 command_runner.go:130] >       ],
	I0819 19:42:56.367783  470984 command_runner.go:130] >       "size": "87190579",
	I0819 19:42:56.367791  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.367801  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.367810  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.367818  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.367823  470984 command_runner.go:130] >     },
	I0819 19:42:56.367830  470984 command_runner.go:130] >     {
	I0819 19:42:56.367838  470984 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 19:42:56.367847  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.367855  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 19:42:56.367864  470984 command_runner.go:130] >       ],
	I0819 19:42:56.367873  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.367885  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 19:42:56.367898  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 19:42:56.367906  470984 command_runner.go:130] >       ],
	I0819 19:42:56.367916  470984 command_runner.go:130] >       "size": "1363676",
	I0819 19:42:56.367926  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.367935  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.367944  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.367953  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.367961  470984 command_runner.go:130] >     },
	I0819 19:42:56.367966  470984 command_runner.go:130] >     {
	I0819 19:42:56.367980  470984 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 19:42:56.367986  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.367992  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 19:42:56.367997  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368002  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368011  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 19:42:56.368024  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 19:42:56.368030  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368035  470984 command_runner.go:130] >       "size": "31470524",
	I0819 19:42:56.368041  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.368046  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.368052  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368057  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.368063  470984 command_runner.go:130] >     },
	I0819 19:42:56.368096  470984 command_runner.go:130] >     {
	I0819 19:42:56.368111  470984 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 19:42:56.368115  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.368121  470984 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 19:42:56.368127  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368131  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368140  470984 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 19:42:56.368148  470984 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 19:42:56.368154  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368159  470984 command_runner.go:130] >       "size": "61245718",
	I0819 19:42:56.368165  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.368170  470984 command_runner.go:130] >       "username": "nonroot",
	I0819 19:42:56.368176  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368180  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.368186  470984 command_runner.go:130] >     },
	I0819 19:42:56.368190  470984 command_runner.go:130] >     {
	I0819 19:42:56.368198  470984 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 19:42:56.368204  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.368209  470984 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 19:42:56.368215  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368219  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368228  470984 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 19:42:56.368235  470984 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 19:42:56.368241  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368246  470984 command_runner.go:130] >       "size": "149009664",
	I0819 19:42:56.368252  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.368256  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.368261  470984 command_runner.go:130] >       },
	I0819 19:42:56.368266  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.368272  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368276  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.368281  470984 command_runner.go:130] >     },
	I0819 19:42:56.368285  470984 command_runner.go:130] >     {
	I0819 19:42:56.368304  470984 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 19:42:56.368315  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.368320  470984 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 19:42:56.368326  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368330  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368339  470984 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 19:42:56.368348  470984 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 19:42:56.368354  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368359  470984 command_runner.go:130] >       "size": "95233506",
	I0819 19:42:56.368364  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.368368  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.368374  470984 command_runner.go:130] >       },
	I0819 19:42:56.368378  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.368384  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368388  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.368394  470984 command_runner.go:130] >     },
	I0819 19:42:56.368397  470984 command_runner.go:130] >     {
	I0819 19:42:56.368403  470984 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 19:42:56.368409  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.368415  470984 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 19:42:56.368421  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368426  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368444  470984 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 19:42:56.368454  470984 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 19:42:56.368460  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368464  470984 command_runner.go:130] >       "size": "89437512",
	I0819 19:42:56.368470  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.368474  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.368480  470984 command_runner.go:130] >       },
	I0819 19:42:56.368483  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.368487  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368491  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.368494  470984 command_runner.go:130] >     },
	I0819 19:42:56.368497  470984 command_runner.go:130] >     {
	I0819 19:42:56.368503  470984 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 19:42:56.368506  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.368511  470984 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 19:42:56.368515  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368518  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368526  470984 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 19:42:56.368538  470984 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 19:42:56.368543  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368555  470984 command_runner.go:130] >       "size": "92728217",
	I0819 19:42:56.368558  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.368562  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.368566  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368570  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.368574  470984 command_runner.go:130] >     },
	I0819 19:42:56.368577  470984 command_runner.go:130] >     {
	I0819 19:42:56.368583  470984 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 19:42:56.368592  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.368597  470984 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 19:42:56.368601  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368607  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368618  470984 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 19:42:56.368628  470984 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 19:42:56.368634  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368641  470984 command_runner.go:130] >       "size": "68420936",
	I0819 19:42:56.368650  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.368657  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.368664  470984 command_runner.go:130] >       },
	I0819 19:42:56.368671  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.368680  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368689  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.368698  470984 command_runner.go:130] >     },
	I0819 19:42:56.368705  470984 command_runner.go:130] >     {
	I0819 19:42:56.368711  470984 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 19:42:56.368718  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.368722  470984 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 19:42:56.368728  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368733  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368742  470984 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 19:42:56.368751  470984 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 19:42:56.368758  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368762  470984 command_runner.go:130] >       "size": "742080",
	I0819 19:42:56.368767  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.368771  470984 command_runner.go:130] >         "value": "65535"
	I0819 19:42:56.368775  470984 command_runner.go:130] >       },
	I0819 19:42:56.368785  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.368791  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368795  470984 command_runner.go:130] >       "pinned": true
	I0819 19:42:56.368800  470984 command_runner.go:130] >     }
	I0819 19:42:56.368804  470984 command_runner.go:130] >   ]
	I0819 19:42:56.368810  470984 command_runner.go:130] > }
	I0819 19:42:56.369014  470984 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:42:56.369029  470984 crio.go:433] Images already preloaded, skipping extraction
	I0819 19:42:56.369080  470984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:42:56.406444  470984 command_runner.go:130] > {
	I0819 19:42:56.406475  470984 command_runner.go:130] >   "images": [
	I0819 19:42:56.406482  470984 command_runner.go:130] >     {
	I0819 19:42:56.406495  470984 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 19:42:56.406501  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.406507  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 19:42:56.406511  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406515  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.406537  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 19:42:56.406547  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 19:42:56.406553  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406561  470984 command_runner.go:130] >       "size": "87165492",
	I0819 19:42:56.406568  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.406573  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.406581  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.406585  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.406594  470984 command_runner.go:130] >     },
	I0819 19:42:56.406600  470984 command_runner.go:130] >     {
	I0819 19:42:56.406612  470984 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 19:42:56.406621  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.406630  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 19:42:56.406641  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406648  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.406655  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 19:42:56.406662  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 19:42:56.406668  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406673  470984 command_runner.go:130] >       "size": "87190579",
	I0819 19:42:56.406677  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.406683  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.406689  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.406693  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.406697  470984 command_runner.go:130] >     },
	I0819 19:42:56.406700  470984 command_runner.go:130] >     {
	I0819 19:42:56.406706  470984 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 19:42:56.406713  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.406717  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 19:42:56.406723  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406727  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.406735  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 19:42:56.406744  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 19:42:56.406747  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406752  470984 command_runner.go:130] >       "size": "1363676",
	I0819 19:42:56.406758  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.406762  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.406768  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.406772  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.406775  470984 command_runner.go:130] >     },
	I0819 19:42:56.406778  470984 command_runner.go:130] >     {
	I0819 19:42:56.406784  470984 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 19:42:56.406790  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.406795  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 19:42:56.406801  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406805  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.406814  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 19:42:56.406825  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 19:42:56.406830  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406834  470984 command_runner.go:130] >       "size": "31470524",
	I0819 19:42:56.406839  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.406850  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.406855  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.406859  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.406863  470984 command_runner.go:130] >     },
	I0819 19:42:56.406868  470984 command_runner.go:130] >     {
	I0819 19:42:56.406874  470984 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 19:42:56.406878  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.406882  470984 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 19:42:56.406888  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406892  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.406901  470984 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 19:42:56.406907  470984 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 19:42:56.406913  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406917  470984 command_runner.go:130] >       "size": "61245718",
	I0819 19:42:56.406920  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.406925  470984 command_runner.go:130] >       "username": "nonroot",
	I0819 19:42:56.406928  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.406932  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.406936  470984 command_runner.go:130] >     },
	I0819 19:42:56.406939  470984 command_runner.go:130] >     {
	I0819 19:42:56.406947  470984 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 19:42:56.406951  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.406958  470984 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 19:42:56.406961  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406965  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.406972  470984 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 19:42:56.406980  470984 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 19:42:56.406984  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406988  470984 command_runner.go:130] >       "size": "149009664",
	I0819 19:42:56.406992  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.406996  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.406999  470984 command_runner.go:130] >       },
	I0819 19:42:56.407003  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.407007  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.407011  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.407016  470984 command_runner.go:130] >     },
	I0819 19:42:56.407019  470984 command_runner.go:130] >     {
	I0819 19:42:56.407025  470984 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 19:42:56.407031  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.407035  470984 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 19:42:56.407041  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407045  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.407052  470984 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 19:42:56.407063  470984 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 19:42:56.407069  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407073  470984 command_runner.go:130] >       "size": "95233506",
	I0819 19:42:56.407077  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.407081  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.407084  470984 command_runner.go:130] >       },
	I0819 19:42:56.407088  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.407094  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.407098  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.407101  470984 command_runner.go:130] >     },
	I0819 19:42:56.407105  470984 command_runner.go:130] >     {
	I0819 19:42:56.407111  470984 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 19:42:56.407118  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.407123  470984 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 19:42:56.407127  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407131  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.407145  470984 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 19:42:56.407155  470984 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 19:42:56.407159  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407163  470984 command_runner.go:130] >       "size": "89437512",
	I0819 19:42:56.407169  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.407173  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.407178  470984 command_runner.go:130] >       },
	I0819 19:42:56.407183  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.407188  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.407193  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.407198  470984 command_runner.go:130] >     },
	I0819 19:42:56.407202  470984 command_runner.go:130] >     {
	I0819 19:42:56.407210  470984 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 19:42:56.407215  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.407219  470984 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 19:42:56.407225  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407229  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.407236  470984 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 19:42:56.407245  470984 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 19:42:56.407249  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407253  470984 command_runner.go:130] >       "size": "92728217",
	I0819 19:42:56.407259  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.407263  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.407269  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.407273  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.407276  470984 command_runner.go:130] >     },
	I0819 19:42:56.407280  470984 command_runner.go:130] >     {
	I0819 19:42:56.407288  470984 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 19:42:56.407294  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.407299  470984 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 19:42:56.407304  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407308  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.407317  470984 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 19:42:56.407326  470984 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 19:42:56.407332  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407336  470984 command_runner.go:130] >       "size": "68420936",
	I0819 19:42:56.407340  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.407344  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.407347  470984 command_runner.go:130] >       },
	I0819 19:42:56.407351  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.407355  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.407359  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.407362  470984 command_runner.go:130] >     },
	I0819 19:42:56.407365  470984 command_runner.go:130] >     {
	I0819 19:42:56.407371  470984 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 19:42:56.407377  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.407381  470984 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 19:42:56.407385  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407392  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.407399  470984 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 19:42:56.407407  470984 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 19:42:56.407411  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407415  470984 command_runner.go:130] >       "size": "742080",
	I0819 19:42:56.407421  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.407425  470984 command_runner.go:130] >         "value": "65535"
	I0819 19:42:56.407430  470984 command_runner.go:130] >       },
	I0819 19:42:56.407434  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.407438  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.407444  470984 command_runner.go:130] >       "pinned": true
	I0819 19:42:56.407447  470984 command_runner.go:130] >     }
	I0819 19:42:56.407451  470984 command_runner.go:130] >   ]
	I0819 19:42:56.407454  470984 command_runner.go:130] > }
	I0819 19:42:56.407578  470984 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:42:56.407591  470984 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:42:56.407598  470984 kubeadm.go:934] updating node { 192.168.39.35 8443 v1.31.0 crio true true} ...
	I0819 19:42:56.407704  470984 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-548379 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-548379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:42:56.407766  470984 ssh_runner.go:195] Run: crio config
	I0819 19:42:56.441110  470984 command_runner.go:130] ! time="2024-08-19 19:42:56.404175179Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 19:42:56.447657  470984 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0819 19:42:56.457302  470984 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 19:42:56.457329  470984 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 19:42:56.457339  470984 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 19:42:56.457344  470984 command_runner.go:130] > #
	I0819 19:42:56.457354  470984 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 19:42:56.457363  470984 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 19:42:56.457372  470984 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 19:42:56.457381  470984 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 19:42:56.457389  470984 command_runner.go:130] > # reload'.
	I0819 19:42:56.457397  470984 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 19:42:56.457408  470984 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 19:42:56.457419  470984 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 19:42:56.457432  470984 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 19:42:56.457440  470984 command_runner.go:130] > [crio]
	I0819 19:42:56.457450  470984 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 19:42:56.457460  470984 command_runner.go:130] > # containers images, in this directory.
	I0819 19:42:56.457470  470984 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 19:42:56.457482  470984 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 19:42:56.457490  470984 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 19:42:56.457500  470984 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 19:42:56.457510  470984 command_runner.go:130] > # imagestore = ""
	I0819 19:42:56.457519  470984 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 19:42:56.457527  470984 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 19:42:56.457533  470984 command_runner.go:130] > storage_driver = "overlay"
	I0819 19:42:56.457539  470984 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 19:42:56.457547  470984 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 19:42:56.457553  470984 command_runner.go:130] > storage_option = [
	I0819 19:42:56.457558  470984 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 19:42:56.457563  470984 command_runner.go:130] > ]
	I0819 19:42:56.457570  470984 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 19:42:56.457578  470984 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 19:42:56.457584  470984 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 19:42:56.457590  470984 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 19:42:56.457598  470984 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 19:42:56.457603  470984 command_runner.go:130] > # always happen on a node reboot
	I0819 19:42:56.457610  470984 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 19:42:56.457626  470984 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 19:42:56.457638  470984 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 19:42:56.457649  470984 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 19:42:56.457660  470984 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 19:42:56.457674  470984 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 19:42:56.457690  470984 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 19:42:56.457700  470984 command_runner.go:130] > # internal_wipe = true
	I0819 19:42:56.457713  470984 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 19:42:56.457721  470984 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 19:42:56.457728  470984 command_runner.go:130] > # internal_repair = false
	I0819 19:42:56.457734  470984 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 19:42:56.457747  470984 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 19:42:56.457755  470984 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 19:42:56.457764  470984 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0819 19:42:56.457772  470984 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 19:42:56.457777  470984 command_runner.go:130] > [crio.api]
	I0819 19:42:56.457784  470984 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 19:42:56.457791  470984 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 19:42:56.457796  470984 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 19:42:56.457803  470984 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 19:42:56.457809  470984 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 19:42:56.457817  470984 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 19:42:56.457821  470984 command_runner.go:130] > # stream_port = "0"
	I0819 19:42:56.457828  470984 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 19:42:56.457832  470984 command_runner.go:130] > # stream_enable_tls = false
	I0819 19:42:56.457840  470984 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 19:42:56.457845  470984 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 19:42:56.457853  470984 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 19:42:56.457861  470984 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 19:42:56.457867  470984 command_runner.go:130] > # minutes.
	I0819 19:42:56.457872  470984 command_runner.go:130] > # stream_tls_cert = ""
	I0819 19:42:56.457879  470984 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 19:42:56.457885  470984 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 19:42:56.457891  470984 command_runner.go:130] > # stream_tls_key = ""
	I0819 19:42:56.457900  470984 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 19:42:56.457908  470984 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 19:42:56.457925  470984 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 19:42:56.457931  470984 command_runner.go:130] > # stream_tls_ca = ""
	I0819 19:42:56.457939  470984 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 19:42:56.457946  470984 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 19:42:56.457953  470984 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 19:42:56.457959  470984 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0819 19:42:56.457966  470984 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 19:42:56.457973  470984 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 19:42:56.457977  470984 command_runner.go:130] > [crio.runtime]
	I0819 19:42:56.457985  470984 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 19:42:56.457993  470984 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 19:42:56.457997  470984 command_runner.go:130] > # "nofile=1024:2048"
	I0819 19:42:56.458005  470984 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 19:42:56.458010  470984 command_runner.go:130] > # default_ulimits = [
	I0819 19:42:56.458013  470984 command_runner.go:130] > # ]
	I0819 19:42:56.458021  470984 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 19:42:56.458027  470984 command_runner.go:130] > # no_pivot = false
	I0819 19:42:56.458033  470984 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 19:42:56.458042  470984 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 19:42:56.458048  470984 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 19:42:56.458054  470984 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 19:42:56.458060  470984 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 19:42:56.458066  470984 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 19:42:56.458073  470984 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 19:42:56.458077  470984 command_runner.go:130] > # Cgroup setting for conmon
	I0819 19:42:56.458086  470984 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 19:42:56.458092  470984 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 19:42:56.458098  470984 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 19:42:56.458104  470984 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 19:42:56.458111  470984 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 19:42:56.458117  470984 command_runner.go:130] > conmon_env = [
	I0819 19:42:56.458122  470984 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 19:42:56.458127  470984 command_runner.go:130] > ]
	I0819 19:42:56.458132  470984 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 19:42:56.458140  470984 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 19:42:56.458146  470984 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 19:42:56.458152  470984 command_runner.go:130] > # default_env = [
	I0819 19:42:56.458155  470984 command_runner.go:130] > # ]
	I0819 19:42:56.458162  470984 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 19:42:56.458170  470984 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0819 19:42:56.458175  470984 command_runner.go:130] > # selinux = false
	I0819 19:42:56.458182  470984 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 19:42:56.458190  470984 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 19:42:56.458195  470984 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 19:42:56.458201  470984 command_runner.go:130] > # seccomp_profile = ""
	I0819 19:42:56.458206  470984 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 19:42:56.458214  470984 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 19:42:56.458219  470984 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 19:42:56.458226  470984 command_runner.go:130] > # which might increase security.
	I0819 19:42:56.458230  470984 command_runner.go:130] > # This option is currently deprecated,
	I0819 19:42:56.458235  470984 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 19:42:56.458243  470984 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 19:42:56.458250  470984 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 19:42:56.458258  470984 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 19:42:56.458266  470984 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 19:42:56.458271  470984 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0819 19:42:56.458278  470984 command_runner.go:130] > # This option supports live configuration reload.
	I0819 19:42:56.458283  470984 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 19:42:56.458290  470984 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 19:42:56.458296  470984 command_runner.go:130] > # the cgroup blockio controller.
	I0819 19:42:56.458302  470984 command_runner.go:130] > # blockio_config_file = ""
	I0819 19:42:56.458308  470984 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 19:42:56.458314  470984 command_runner.go:130] > # blockio parameters.
	I0819 19:42:56.458318  470984 command_runner.go:130] > # blockio_reload = false
	I0819 19:42:56.458324  470984 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 19:42:56.458330  470984 command_runner.go:130] > # irqbalance daemon.
	I0819 19:42:56.458335  470984 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 19:42:56.458343  470984 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0819 19:42:56.458352  470984 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 19:42:56.458361  470984 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 19:42:56.458369  470984 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 19:42:56.458377  470984 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 19:42:56.458382  470984 command_runner.go:130] > # This option supports live configuration reload.
	I0819 19:42:56.458388  470984 command_runner.go:130] > # rdt_config_file = ""
	I0819 19:42:56.458393  470984 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 19:42:56.458399  470984 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 19:42:56.458415  470984 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 19:42:56.458422  470984 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 19:42:56.458428  470984 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 19:42:56.458435  470984 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 19:42:56.458440  470984 command_runner.go:130] > # will be added.
	I0819 19:42:56.458444  470984 command_runner.go:130] > # default_capabilities = [
	I0819 19:42:56.458450  470984 command_runner.go:130] > # 	"CHOWN",
	I0819 19:42:56.458454  470984 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 19:42:56.458460  470984 command_runner.go:130] > # 	"FSETID",
	I0819 19:42:56.458464  470984 command_runner.go:130] > # 	"FOWNER",
	I0819 19:42:56.458469  470984 command_runner.go:130] > # 	"SETGID",
	I0819 19:42:56.458473  470984 command_runner.go:130] > # 	"SETUID",
	I0819 19:42:56.458478  470984 command_runner.go:130] > # 	"SETPCAP",
	I0819 19:42:56.458482  470984 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 19:42:56.458488  470984 command_runner.go:130] > # 	"KILL",
	I0819 19:42:56.458492  470984 command_runner.go:130] > # ]
	I0819 19:42:56.458502  470984 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 19:42:56.458510  470984 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 19:42:56.458518  470984 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 19:42:56.458525  470984 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 19:42:56.458533  470984 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 19:42:56.458538  470984 command_runner.go:130] > default_sysctls = [
	I0819 19:42:56.458543  470984 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 19:42:56.458548  470984 command_runner.go:130] > ]
	I0819 19:42:56.458553  470984 command_runner.go:130] > # List of devices on the host that a
	I0819 19:42:56.458561  470984 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 19:42:56.458568  470984 command_runner.go:130] > # allowed_devices = [
	I0819 19:42:56.458572  470984 command_runner.go:130] > # 	"/dev/fuse",
	I0819 19:42:56.458577  470984 command_runner.go:130] > # ]
	I0819 19:42:56.458581  470984 command_runner.go:130] > # List of additional devices. specified as
	I0819 19:42:56.458590  470984 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 19:42:56.458597  470984 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 19:42:56.458603  470984 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 19:42:56.458612  470984 command_runner.go:130] > # additional_devices = [
	I0819 19:42:56.458620  470984 command_runner.go:130] > # ]
	I0819 19:42:56.458628  470984 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 19:42:56.458636  470984 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 19:42:56.458645  470984 command_runner.go:130] > # 	"/etc/cdi",
	I0819 19:42:56.458652  470984 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 19:42:56.458660  470984 command_runner.go:130] > # ]
	I0819 19:42:56.458672  470984 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 19:42:56.458683  470984 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 19:42:56.458689  470984 command_runner.go:130] > # Defaults to false.
	I0819 19:42:56.458694  470984 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 19:42:56.458702  470984 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 19:42:56.458711  470984 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 19:42:56.458715  470984 command_runner.go:130] > # hooks_dir = [
	I0819 19:42:56.458720  470984 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 19:42:56.458728  470984 command_runner.go:130] > # ]
	I0819 19:42:56.458737  470984 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 19:42:56.458751  470984 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 19:42:56.458759  470984 command_runner.go:130] > # its default mounts from the following two files:
	I0819 19:42:56.458762  470984 command_runner.go:130] > #
	I0819 19:42:56.458768  470984 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 19:42:56.458776  470984 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 19:42:56.458784  470984 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 19:42:56.458788  470984 command_runner.go:130] > #
	I0819 19:42:56.458796  470984 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 19:42:56.458804  470984 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 19:42:56.458811  470984 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 19:42:56.458819  470984 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 19:42:56.458822  470984 command_runner.go:130] > #
	I0819 19:42:56.458826  470984 command_runner.go:130] > # default_mounts_file = ""
	I0819 19:42:56.458834  470984 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 19:42:56.458840  470984 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 19:42:56.458845  470984 command_runner.go:130] > pids_limit = 1024
	I0819 19:42:56.458851  470984 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0819 19:42:56.458858  470984 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 19:42:56.458864  470984 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 19:42:56.458874  470984 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 19:42:56.458880  470984 command_runner.go:130] > # log_size_max = -1
	I0819 19:42:56.458887  470984 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 19:42:56.458893  470984 command_runner.go:130] > # log_to_journald = false
	I0819 19:42:56.458899  470984 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 19:42:56.458907  470984 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 19:42:56.458912  470984 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 19:42:56.458919  470984 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 19:42:56.458924  470984 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 19:42:56.458930  470984 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 19:42:56.458935  470984 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 19:42:56.458942  470984 command_runner.go:130] > # read_only = false
	I0819 19:42:56.458947  470984 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 19:42:56.458960  470984 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 19:42:56.458966  470984 command_runner.go:130] > # live configuration reload.
	I0819 19:42:56.458970  470984 command_runner.go:130] > # log_level = "info"
	I0819 19:42:56.458978  470984 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 19:42:56.458986  470984 command_runner.go:130] > # This option supports live configuration reload.
	I0819 19:42:56.458990  470984 command_runner.go:130] > # log_filter = ""
	I0819 19:42:56.458998  470984 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 19:42:56.459005  470984 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 19:42:56.459012  470984 command_runner.go:130] > # separated by comma.
	I0819 19:42:56.459019  470984 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 19:42:56.459026  470984 command_runner.go:130] > # uid_mappings = ""
	I0819 19:42:56.459031  470984 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 19:42:56.459039  470984 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 19:42:56.459043  470984 command_runner.go:130] > # separated by comma.
	I0819 19:42:56.459051  470984 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 19:42:56.459057  470984 command_runner.go:130] > # gid_mappings = ""
	I0819 19:42:56.459063  470984 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 19:42:56.459071  470984 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 19:42:56.459079  470984 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 19:42:56.459086  470984 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 19:42:56.459092  470984 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 19:42:56.459098  470984 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 19:42:56.459107  470984 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 19:42:56.459113  470984 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 19:42:56.459121  470984 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 19:42:56.459128  470984 command_runner.go:130] > # minimum_mappable_gid = -1
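A hypothetical example of the deprecated mapping options above, using the containerID:HostID:Size form from the comments (the ranges are made up for illustration only):
	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	minimum_mappable_uid = 100000
	minimum_mappable_gid = 100000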
	I0819 19:42:56.459134  470984 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 19:42:56.459141  470984 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 19:42:56.459146  470984 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 19:42:56.459152  470984 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 19:42:56.459158  470984 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 19:42:56.459166  470984 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 19:42:56.459171  470984 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 19:42:56.459178  470984 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 19:42:56.459182  470984 command_runner.go:130] > drop_infra_ctr = false
	I0819 19:42:56.459190  470984 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 19:42:56.459197  470984 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 19:42:56.459204  470984 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 19:42:56.459211  470984 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 19:42:56.459218  470984 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 19:42:56.459226  470984 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 19:42:56.459231  470984 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 19:42:56.459238  470984 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 19:42:56.459242  470984 command_runner.go:130] > # shared_cpuset = ""
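As a sketch, the two cpuset options above take Linux CPU list syntax; the CPU numbers here are placeholders, not values from this cluster:
	[crio.runtime]
	# Pin infra (pause) containers to CPUs 0-1 and allow CPUs 2-3 to be shared between guaranteed containers.
	infra_ctr_cpuset = "0-1"
	shared_cpuset = "2-3"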
	I0819 19:42:56.459247  470984 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 19:42:56.459254  470984 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 19:42:56.459259  470984 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 19:42:56.459265  470984 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 19:42:56.459271  470984 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 19:42:56.459276  470984 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 19:42:56.459284  470984 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 19:42:56.459288  470984 command_runner.go:130] > # enable_criu_support = false
	I0819 19:42:56.459294  470984 command_runner.go:130] > # Enable/disable the generation of the container,
	I0819 19:42:56.459302  470984 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 19:42:56.459306  470984 command_runner.go:130] > # enable_pod_events = false
	I0819 19:42:56.459314  470984 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 19:42:56.459327  470984 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 19:42:56.459331  470984 command_runner.go:130] > # default_runtime = "runc"
	I0819 19:42:56.459338  470984 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 19:42:56.459345  470984 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0819 19:42:56.459356  470984 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 19:42:56.459364  470984 command_runner.go:130] > # creation as a file is not desired either.
	I0819 19:42:56.459371  470984 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 19:42:56.459379  470984 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 19:42:56.459383  470984 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 19:42:56.459389  470984 command_runner.go:130] > # ]
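Following the /etc/hostname example in the comments above, a minimal setting might look like this (illustrative only):
	[crio.runtime]
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]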
	I0819 19:42:56.459395  470984 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 19:42:56.459403  470984 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 19:42:56.459410  470984 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 19:42:56.459417  470984 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 19:42:56.459420  470984 command_runner.go:130] > #
	I0819 19:42:56.459426  470984 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 19:42:56.459431  470984 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 19:42:56.459457  470984 command_runner.go:130] > # runtime_type = "oci"
	I0819 19:42:56.459464  470984 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 19:42:56.459468  470984 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 19:42:56.459475  470984 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 19:42:56.459479  470984 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 19:42:56.459485  470984 command_runner.go:130] > # monitor_env = []
	I0819 19:42:56.459490  470984 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 19:42:56.459497  470984 command_runner.go:130] > # allowed_annotations = []
	I0819 19:42:56.459503  470984 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 19:42:56.459509  470984 command_runner.go:130] > # Where:
	I0819 19:42:56.459514  470984 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 19:42:56.459522  470984 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 19:42:56.459531  470984 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 19:42:56.459539  470984 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 19:42:56.459543  470984 command_runner.go:130] > #   in $PATH.
	I0819 19:42:56.459550  470984 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 19:42:56.459556  470984 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 19:42:56.459563  470984 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 19:42:56.459568  470984 command_runner.go:130] > #   state.
	I0819 19:42:56.459574  470984 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 19:42:56.459582  470984 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0819 19:42:56.459591  470984 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 19:42:56.459598  470984 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 19:42:56.459604  470984 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 19:42:56.459616  470984 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 19:42:56.459627  470984 command_runner.go:130] > #   The currently recognized values are:
	I0819 19:42:56.459639  470984 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 19:42:56.459653  470984 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 19:42:56.459665  470984 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 19:42:56.459677  470984 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 19:42:56.459691  470984 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 19:42:56.459701  470984 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 19:42:56.459709  470984 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 19:42:56.459718  470984 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 19:42:56.459725  470984 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 19:42:56.459734  470984 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 19:42:56.459745  470984 command_runner.go:130] > #   deprecated option "conmon".
	I0819 19:42:56.459754  470984 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 19:42:56.459762  470984 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 19:42:56.459768  470984 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 19:42:56.459775  470984 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 19:42:56.459781  470984 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0819 19:42:56.459788  470984 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 19:42:56.459794  470984 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 19:42:56.459801  470984 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0819 19:42:56.459804  470984 command_runner.go:130] > #
	I0819 19:42:56.459810  470984 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 19:42:56.459814  470984 command_runner.go:130] > #
	I0819 19:42:56.459820  470984 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 19:42:56.459828  470984 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 19:42:56.459832  470984 command_runner.go:130] > #
	I0819 19:42:56.459838  470984 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 19:42:56.459846  470984 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 19:42:56.459849  470984 command_runner.go:130] > #
	I0819 19:42:56.459855  470984 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 19:42:56.459860  470984 command_runner.go:130] > # feature.
	I0819 19:42:56.459863  470984 command_runner.go:130] > #
	I0819 19:42:56.459869  470984 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0819 19:42:56.459877  470984 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 19:42:56.459884  470984 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 19:42:56.459892  470984 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 19:42:56.459900  470984 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 19:42:56.459905  470984 command_runner.go:130] > #
	I0819 19:42:56.459911  470984 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 19:42:56.459919  470984 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 19:42:56.459923  470984 command_runner.go:130] > #
	I0819 19:42:56.459929  470984 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 19:42:56.459936  470984 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 19:42:56.459939  470984 command_runner.go:130] > #
	I0819 19:42:56.459945  470984 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 19:42:56.459953  470984 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 19:42:56.459957  470984 command_runner.go:130] > # limitation.
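Tying the runtime-handler format and the seccomp notifier notes together, a hypothetical extra handler entry could look like the sketch below. The handler name, root and paths are assumptions; only the annotation key comes from the allowed_annotations list above, and this block is not part of the configuration dumped in this run:
	[crio.runtime.runtimes.runc-notify]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc-notify"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]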
	I0819 19:42:56.459966  470984 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 19:42:56.459972  470984 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 19:42:56.459976  470984 command_runner.go:130] > runtime_type = "oci"
	I0819 19:42:56.459982  470984 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 19:42:56.459986  470984 command_runner.go:130] > runtime_config_path = ""
	I0819 19:42:56.459992  470984 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 19:42:56.459998  470984 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 19:42:56.460006  470984 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 19:42:56.460014  470984 command_runner.go:130] > monitor_env = [
	I0819 19:42:56.460027  470984 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 19:42:56.460033  470984 command_runner.go:130] > ]
	I0819 19:42:56.460040  470984 command_runner.go:130] > privileged_without_host_devices = false
	I0819 19:42:56.460052  470984 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 19:42:56.460062  470984 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 19:42:56.460074  470984 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 19:42:56.460088  470984 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0819 19:42:56.460098  470984 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 19:42:56.460107  470984 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 19:42:56.460117  470984 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 19:42:56.460126  470984 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 19:42:56.460132  470984 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 19:42:56.460139  470984 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 19:42:56.460142  470984 command_runner.go:130] > # Example:
	I0819 19:42:56.460147  470984 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 19:42:56.460151  470984 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 19:42:56.460156  470984 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 19:42:56.460161  470984 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 19:42:56.460164  470984 command_runner.go:130] > # cpuset = 0
	I0819 19:42:56.460168  470984 command_runner.go:130] > # cpushares = "0-1"
	I0819 19:42:56.460171  470984 command_runner.go:130] > # Where:
	I0819 19:42:56.460176  470984 command_runner.go:130] > # The workload name is workload-type.
	I0819 19:42:56.460182  470984 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 19:42:56.460187  470984 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 19:42:56.460192  470984 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 19:42:56.460199  470984 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 19:42:56.460205  470984 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0819 19:42:56.460211  470984 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 19:42:56.460217  470984 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 19:42:56.460221  470984 command_runner.go:130] > # Default value is set to true
	I0819 19:42:56.460225  470984 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 19:42:56.460230  470984 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 19:42:56.460235  470984 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 19:42:56.460239  470984 command_runner.go:130] > # Default value is set to 'false'
	I0819 19:42:56.460243  470984 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 19:42:56.460249  470984 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 19:42:56.460252  470984 command_runner.go:130] > #
	I0819 19:42:56.460257  470984 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 19:42:56.460263  470984 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 19:42:56.460269  470984 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 19:42:56.460274  470984 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 19:42:56.460280  470984 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0819 19:42:56.460283  470984 command_runner.go:130] > [crio.image]
	I0819 19:42:56.460289  470984 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 19:42:56.460293  470984 command_runner.go:130] > # default_transport = "docker://"
	I0819 19:42:56.460299  470984 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 19:42:56.460305  470984 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 19:42:56.460313  470984 command_runner.go:130] > # global_auth_file = ""
	I0819 19:42:56.460319  470984 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 19:42:56.460326  470984 command_runner.go:130] > # This option supports live configuration reload.
	I0819 19:42:56.460331  470984 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 19:42:56.460339  470984 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 19:42:56.460346  470984 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 19:42:56.460353  470984 command_runner.go:130] > # This option supports live configuration reload.
	I0819 19:42:56.460357  470984 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 19:42:56.460365  470984 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 19:42:56.460371  470984 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0819 19:42:56.460377  470984 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0819 19:42:56.460385  470984 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 19:42:56.460391  470984 command_runner.go:130] > # pause_command = "/pause"
	I0819 19:42:56.460396  470984 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 19:42:56.460404  470984 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 19:42:56.460410  470984 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 19:42:56.460420  470984 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 19:42:56.460428  470984 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 19:42:56.460434  470984 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 19:42:56.460439  470984 command_runner.go:130] > # pinned_images = [
	I0819 19:42:56.460443  470984 command_runner.go:130] > # ]
	I0819 19:42:56.460450  470984 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 19:42:56.460457  470984 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 19:42:56.460463  470984 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 19:42:56.460471  470984 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 19:42:56.460478  470984 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 19:42:56.460484  470984 command_runner.go:130] > # signature_policy = ""
	I0819 19:42:56.460490  470984 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 19:42:56.460501  470984 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 19:42:56.460509  470984 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 19:42:56.460515  470984 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0819 19:42:56.460523  470984 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 19:42:56.460528  470984 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 19:42:56.460535  470984 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 19:42:56.460541  470984 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 19:42:56.460547  470984 command_runner.go:130] > # changing them here.
	I0819 19:42:56.460551  470984 command_runner.go:130] > # insecure_registries = [
	I0819 19:42:56.460556  470984 command_runner.go:130] > # ]
	I0819 19:42:56.460562  470984 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 19:42:56.460569  470984 command_runner.go:130] > # ignore; the last of these will ignore volumes entirely.
	I0819 19:42:56.460573  470984 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 19:42:56.460579  470984 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 19:42:56.460583  470984 command_runner.go:130] > # big_files_temporary_dir = ""
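As an illustration of the [crio.image] options above, a drop-in overriding registry trust and image pinning might look like this; the registry host is hypothetical, while the pause image matches the one configured in this run:
	[crio.image]
	insecure_registries = [
		"registry.local:5000",
	]
	pinned_images = [
		"registry.k8s.io/pause:3.10",
	]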
	I0819 19:42:56.460591  470984 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 19:42:56.460596  470984 command_runner.go:130] > # CNI plugins.
	I0819 19:42:56.460599  470984 command_runner.go:130] > [crio.network]
	I0819 19:42:56.460608  470984 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 19:42:56.460618  470984 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0819 19:42:56.460627  470984 command_runner.go:130] > # cni_default_network = ""
	I0819 19:42:56.460638  470984 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 19:42:56.460648  470984 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 19:42:56.460659  470984 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 19:42:56.460668  470984 command_runner.go:130] > # plugin_dirs = [
	I0819 19:42:56.460676  470984 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 19:42:56.460681  470984 command_runner.go:130] > # ]
	I0819 19:42:56.460691  470984 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0819 19:42:56.460700  470984 command_runner.go:130] > [crio.metrics]
	I0819 19:42:56.460710  470984 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 19:42:56.460716  470984 command_runner.go:130] > enable_metrics = true
	I0819 19:42:56.460724  470984 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 19:42:56.460729  470984 command_runner.go:130] > # Per default all metrics are enabled.
	I0819 19:42:56.460737  470984 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0819 19:42:56.460747  470984 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 19:42:56.460755  470984 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 19:42:56.460759  470984 command_runner.go:130] > # metrics_collectors = [
	I0819 19:42:56.460765  470984 command_runner.go:130] > # 	"operations",
	I0819 19:42:56.460770  470984 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 19:42:56.460776  470984 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 19:42:56.460781  470984 command_runner.go:130] > # 	"operations_errors",
	I0819 19:42:56.460787  470984 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 19:42:56.460791  470984 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 19:42:56.460797  470984 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 19:42:56.460801  470984 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 19:42:56.460806  470984 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 19:42:56.460812  470984 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 19:42:56.460815  470984 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 19:42:56.460822  470984 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 19:42:56.460826  470984 command_runner.go:130] > # 	"containers_oom_total",
	I0819 19:42:56.460832  470984 command_runner.go:130] > # 	"containers_oom",
	I0819 19:42:56.460836  470984 command_runner.go:130] > # 	"processes_defunct",
	I0819 19:42:56.460841  470984 command_runner.go:130] > # 	"operations_total",
	I0819 19:42:56.460845  470984 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 19:42:56.460850  470984 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 19:42:56.460856  470984 command_runner.go:130] > # 	"operations_errors_total",
	I0819 19:42:56.460859  470984 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 19:42:56.460866  470984 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 19:42:56.460871  470984 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 19:42:56.460877  470984 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 19:42:56.460882  470984 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 19:42:56.460889  470984 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 19:42:56.460894  470984 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 19:42:56.460900  470984 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 19:42:56.460904  470984 command_runner.go:130] > # ]
	I0819 19:42:56.460911  470984 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 19:42:56.460915  470984 command_runner.go:130] > # metrics_port = 9090
	I0819 19:42:56.460921  470984 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 19:42:56.460925  470984 command_runner.go:130] > # metrics_socket = ""
	I0819 19:42:56.460932  470984 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 19:42:56.460938  470984 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 19:42:56.460946  470984 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 19:42:56.460953  470984 command_runner.go:130] > # certificate on any modification event.
	I0819 19:42:56.460957  470984 command_runner.go:130] > # metrics_cert = ""
	I0819 19:42:56.460964  470984 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 19:42:56.460969  470984 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 19:42:56.460975  470984 command_runner.go:130] > # metrics_key = ""
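A sketch of a narrowed-down [crio.metrics] section using collector names taken from the list above (the particular selection is arbitrary):
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]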
	I0819 19:42:56.460980  470984 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 19:42:56.460986  470984 command_runner.go:130] > [crio.tracing]
	I0819 19:42:56.460992  470984 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 19:42:56.460998  470984 command_runner.go:130] > # enable_tracing = false
	I0819 19:42:56.461004  470984 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0819 19:42:56.461010  470984 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 19:42:56.461017  470984 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 19:42:56.461023  470984 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
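Per the comments above, enabling tracing with always-on sampling could look like the following; the collector address is a placeholder:
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000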
	I0819 19:42:56.461027  470984 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 19:42:56.461032  470984 command_runner.go:130] > [crio.nri]
	I0819 19:42:56.461036  470984 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 19:42:56.461040  470984 command_runner.go:130] > # enable_nri = false
	I0819 19:42:56.461045  470984 command_runner.go:130] > # NRI socket to listen on.
	I0819 19:42:56.461049  470984 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 19:42:56.461056  470984 command_runner.go:130] > # NRI plugin directory to use.
	I0819 19:42:56.461060  470984 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 19:42:56.461067  470984 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 19:42:56.461072  470984 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 19:42:56.461079  470984 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 19:42:56.461085  470984 command_runner.go:130] > # nri_disable_connections = false
	I0819 19:42:56.461092  470984 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 19:42:56.461096  470984 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 19:42:56.461102  470984 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 19:42:56.461108  470984 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
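Similarly, a minimal [crio.nri] section that just switches NRI on while keeping the documented default paths (illustrative only):
	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"
	nri_plugin_dir = "/opt/nri/plugins"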
	I0819 19:42:56.461114  470984 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 19:42:56.461120  470984 command_runner.go:130] > [crio.stats]
	I0819 19:42:56.461126  470984 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 19:42:56.461146  470984 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 19:42:56.461154  470984 command_runner.go:130] > # stats_collection_period = 0
	I0819 19:42:56.461297  470984 cni.go:84] Creating CNI manager for ""
	I0819 19:42:56.461309  470984 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 19:42:56.461319  470984 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:42:56.461341  470984 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.35 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-548379 NodeName:multinode-548379 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.35"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.35 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:42:56.461505  470984 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.35
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-548379"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.35
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.35"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:42:56.461571  470984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:42:56.472279  470984 command_runner.go:130] > kubeadm
	I0819 19:42:56.472310  470984 command_runner.go:130] > kubectl
	I0819 19:42:56.472314  470984 command_runner.go:130] > kubelet
	I0819 19:42:56.472358  470984 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:42:56.472418  470984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:42:56.482707  470984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0819 19:42:56.500172  470984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:42:56.516942  470984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0819 19:42:56.534736  470984 ssh_runner.go:195] Run: grep 192.168.39.35	control-plane.minikube.internal$ /etc/hosts
	I0819 19:42:56.538949  470984 command_runner.go:130] > 192.168.39.35	control-plane.minikube.internal
	I0819 19:42:56.539129  470984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:42:56.683931  470984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:42:56.698805  470984 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379 for IP: 192.168.39.35
	I0819 19:42:56.698831  470984 certs.go:194] generating shared ca certs ...
	I0819 19:42:56.698850  470984 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:42:56.699010  470984 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:42:56.699046  470984 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:42:56.699056  470984 certs.go:256] generating profile certs ...
	I0819 19:42:56.699126  470984 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/client.key
	I0819 19:42:56.699179  470984 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/apiserver.key.1a7a4ed8
	I0819 19:42:56.699215  470984 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/proxy-client.key
	I0819 19:42:56.699226  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 19:42:56.699237  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 19:42:56.699249  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 19:42:56.699258  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 19:42:56.699270  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 19:42:56.699282  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 19:42:56.699294  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 19:42:56.699304  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 19:42:56.699406  470984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:42:56.699439  470984 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:42:56.699448  470984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:42:56.699470  470984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:42:56.699500  470984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:42:56.699521  470984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:42:56.699557  470984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:42:56.699585  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 19:42:56.699600  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:42:56.699612  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 19:42:56.700207  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:42:56.728401  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:42:56.752971  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:42:56.778598  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:42:56.802950  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 19:42:56.827536  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:42:56.851614  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:42:56.876900  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 19:42:56.901555  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:42:56.926681  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:42:56.951824  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:42:56.977393  470984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:42:56.994580  470984 ssh_runner.go:195] Run: openssl version
	I0819 19:42:57.000345  470984 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 19:42:57.000428  470984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:42:57.011544  470984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:42:57.016559  470984 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:42:57.016611  470984 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:42:57.016667  470984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:42:57.022423  470984 command_runner.go:130] > 3ec20f2e
	I0819 19:42:57.022519  470984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:42:57.032613  470984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:42:57.043983  470984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:42:57.048851  470984 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:42:57.048888  470984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:42:57.048951  470984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:42:57.054784  470984 command_runner.go:130] > b5213941
	I0819 19:42:57.054879  470984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:42:57.065118  470984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:42:57.076449  470984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:42:57.081102  470984 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:42:57.081170  470984 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:42:57.081228  470984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:42:57.087325  470984 command_runner.go:130] > 51391683
	I0819 19:42:57.087411  470984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:42:57.097254  470984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:42:57.102107  470984 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:42:57.102147  470984 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 19:42:57.102156  470984 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0819 19:42:57.102164  470984 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 19:42:57.102170  470984 command_runner.go:130] > Access: 2024-08-19 19:36:20.743248210 +0000
	I0819 19:42:57.102175  470984 command_runner.go:130] > Modify: 2024-08-19 19:36:20.743248210 +0000
	I0819 19:42:57.102179  470984 command_runner.go:130] > Change: 2024-08-19 19:36:20.743248210 +0000
	I0819 19:42:57.102184  470984 command_runner.go:130] >  Birth: 2024-08-19 19:36:20.743248210 +0000
	I0819 19:42:57.102242  470984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:42:57.108203  470984 command_runner.go:130] > Certificate will not expire
	I0819 19:42:57.108288  470984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:42:57.114225  470984 command_runner.go:130] > Certificate will not expire
	I0819 19:42:57.114311  470984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:42:57.120181  470984 command_runner.go:130] > Certificate will not expire
	I0819 19:42:57.120280  470984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:42:57.126088  470984 command_runner.go:130] > Certificate will not expire
	I0819 19:42:57.126186  470984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:42:57.132018  470984 command_runner.go:130] > Certificate will not expire
	I0819 19:42:57.132129  470984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 19:42:57.138676  470984 command_runner.go:130] > Certificate will not expire
	I0819 19:42:57.138933  470984 kubeadm.go:392] StartCluster: {Name:multinode-548379 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-548379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:42:57.139052  470984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:42:57.139120  470984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:42:57.176688  470984 command_runner.go:130] > f88e348b3a84742ec396ed84b9260f8708967811698fae9424a4e5c6f73eb9ec
	I0819 19:42:57.176721  470984 command_runner.go:130] > 0adbbf88e8e6f5c8f99f70f167d0be23685b6efe52beb7c096a3d94e98e27772
	I0819 19:42:57.176727  470984 command_runner.go:130] > 3947c41c8021a850328a6ef431f4aa7c273b46da629f27172dd9594e95a309e8
	I0819 19:42:57.176734  470984 command_runner.go:130] > 8cde9d50116f223ef6493538f5ae20918c477628d4127a2ac98182f64446395b
	I0819 19:42:57.176739  470984 command_runner.go:130] > 97b3ea7b8c2ce2841b0ee7f45eca09f3d59b14fbb72dd20976dd9581f6ecc942
	I0819 19:42:57.176744  470984 command_runner.go:130] > e83dd57bfe6d4f267dc1e80968e066a1107866708e1ed336dd94dde65dc97994
	I0819 19:42:57.176750  470984 command_runner.go:130] > 24eff3c1f0c13300bdb7581b9cafdc8c7c9bea7e77c52fc6491f4e3055c7eea1
	I0819 19:42:57.176757  470984 command_runner.go:130] > 0c3794f7593119cdb9a488c4223a840b2b1089549a68b2866400eac7c25f9ec4
	I0819 19:42:57.178456  470984 cri.go:89] found id: "f88e348b3a84742ec396ed84b9260f8708967811698fae9424a4e5c6f73eb9ec"
	I0819 19:42:57.178480  470984 cri.go:89] found id: "0adbbf88e8e6f5c8f99f70f167d0be23685b6efe52beb7c096a3d94e98e27772"
	I0819 19:42:57.178484  470984 cri.go:89] found id: "3947c41c8021a850328a6ef431f4aa7c273b46da629f27172dd9594e95a309e8"
	I0819 19:42:57.178488  470984 cri.go:89] found id: "8cde9d50116f223ef6493538f5ae20918c477628d4127a2ac98182f64446395b"
	I0819 19:42:57.178492  470984 cri.go:89] found id: "97b3ea7b8c2ce2841b0ee7f45eca09f3d59b14fbb72dd20976dd9581f6ecc942"
	I0819 19:42:57.178496  470984 cri.go:89] found id: "e83dd57bfe6d4f267dc1e80968e066a1107866708e1ed336dd94dde65dc97994"
	I0819 19:42:57.178499  470984 cri.go:89] found id: "24eff3c1f0c13300bdb7581b9cafdc8c7c9bea7e77c52fc6491f4e3055c7eea1"
	I0819 19:42:57.178501  470984 cri.go:89] found id: "0c3794f7593119cdb9a488c4223a840b2b1089549a68b2866400eac7c25f9ec4"
	I0819 19:42:57.178505  470984 cri.go:89] found id: ""
	I0819 19:42:57.178562  470984 ssh_runner.go:195] Run: sudo runc list -f json
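	The eight "found id:" entries above are produced by the crictl invocation recorded at ssh_runner.go:195. A minimal Go sketch that reproduces the same listing from inside the node (an assumption: shell access via minikube ssh -p multinode-548379 and crictl on the PATH; this helper is not part of the minikube test suite) is:

	// listids.go - hypothetical helper that mirrors the kube-system container
	// listing shown in the log above; run it inside the minikube node.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same listing the log records: all kube-system containers
		// (running or exited), printing only their IDs.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
		if err != nil {
			fmt.Printf("crictl failed: %v\n%s", err, out)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id) // one ID per line, as in the cri.go:89 entries
		}
	}

	Running this inside the node should print the same container IDs that the cri.go lines report above.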
	
	
	==> CRI-O <==
	Aug 19 19:44:41 multinode-548379 crio[2728]: time="2024-08-19 19:44:41.926038968Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096681926010156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77013ea3-7b1c-48de-86f0-66a30705b88b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:44:41 multinode-548379 crio[2728]: time="2024-08-19 19:44:41.926585653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f73f5a4e-a25a-42c4-99ff-2655e669492d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:44:41 multinode-548379 crio[2728]: time="2024-08-19 19:44:41.926658884Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f73f5a4e-a25a-42c4-99ff-2655e669492d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:44:41 multinode-548379 crio[2728]: time="2024-08-19 19:44:41.926989693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ce06179cd684646b66dbd324528e06932e8762fb7fb07e6495116aafe4a5e5a,PodSandboxId:c9ee51d13730c97f9ae2b716aabb29354754fc5e7229a23aa26fdf52e5558ef4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724096618567373859,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb22ddb238a00e49806fe2f3b3d3903d48456469a2171875dad0657f770e963,PodSandboxId:e2a1b8fee23ca1e8f926715fbb35a9aec79376b5c595b6efaf721bf7182ab83d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724096585124100451,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5526087b87c13c5800d983aaa9ee7dba1126cc9d45ae7a4da130df1907428b5,PodSandboxId:547e595b7b5c01f0e0def61a83c96ff3cf563c3f44dd035bf1ea22bdd26c88ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724096585099078943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f
49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b1bd21d08ea6058aa3244307ac730803d425e7942459534b2e2f1d8afcf7a5,PodSandboxId:8a448d625b7cefdfb96ba6e560c16df86fb29923713f3b3c62638806b24b995f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724096584941318887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae957bd572857eca22ae7721f8b447ffa030532411fe7768a67d7e9fba77d34f,PodSandboxId:4a044dfbc16cc1b1a23a820838db146a599c773c5067243491256d58db88e7b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724096584892451234,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946a6f1e520ab3866b797503ca3f72b7e262206f3595d0eafdb8f0bdc25d5845,PodSandboxId:3d6d7ac6cab8d586b365307ed6cbdd5b0f0a5b5504582f04cbccd6f3b1286f0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724096579998215591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c9bcc4dcef7cee9bcb1ab6f982a11667ff9fb0c9c2cd7290e1c8fa52eac90e,PodSandboxId:caec45dd0098803294ca6fb9f81f93e9a59f18eacdf92503e16ae9ded54ed1fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724096580029318660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9ef8b8013cdc17033d16419fd14ec8e27d80e9deedb824c413d93e9da4fb06,PodSandboxId:40c11ab920ab3b8b2e434a89638dc8418c8ed954966d4bc37005a4b5dcd47442,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724096580005690714,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d520dc4f2fb7ed6e16b3b726559b7e9fdb6e4422b957094032e868df08609cc5,PodSandboxId:49530a6bcf60e0045013d3f7fde0245a60ecb75818b258ac01eb896c24e95115,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724096579992393873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9776d352552c1949c2e922c4b1dbccb4ef607d23ad40fa301057767159a8df7,PodSandboxId:fbb109b13ee5d14f80a7a4664d52d242bbef71efdf0dd882cc60050992ca4e8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724096264085479148,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88e348b3a84742ec396ed84b9260f8708967811698fae9424a4e5c6f73eb9ec,PodSandboxId:2470ac60e23979596614d3254f07aeb346b230a30af3cc3bc646c124c18ae9db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724096211013464242,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0adbbf88e8e6f5c8f99f70f167d0be23685b6efe52beb7c096a3d94e98e27772,PodSandboxId:362dc4a5650b90487f55397ab06653fb47fb8df2f0641963873dfbd1d525c55d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724096211010541482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3947c41c8021a850328a6ef431f4aa7c273b46da629f27172dd9594e95a309e8,PodSandboxId:34092c53295429c688907b804397c216002b82eeb57dbb2ecc72b6e1d7a69c00,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724096199493021910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cde9d50116f223ef6493538f5ae20918c477628d4127a2ac98182f64446395b,PodSandboxId:b29008cd62ee35360342dcf8c299f9df4d281b643ef01a34d97363c53e10df40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724096196163904164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e83dd57bfe6d4f267dc1e80968e066a1107866708e1ed336dd94dde65dc97994,PodSandboxId:87e7b3fa3f69e716a3941fc1358c45a3623db4d186825437b12fbd558d518f2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724096183675176161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d
2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3ea7b8c2ce2841b0ee7f45eca09f3d59b14fbb72dd20976dd9581f6ecc942,PodSandboxId:9887b7e5dfb51f294b8ad7b79570292afc80291232d468516e3aa0d8cfa174c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724096183677835635,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24eff3c1f0c13300bdb7581b9cafdc8c7c9bea7e77c52fc6491f4e3055c7eea1,PodSandboxId:688f4e7ce5e29c81c6cab2795297896410a965465b22c91ae0c5f4f711c4a43a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724096183619481989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3794f7593119cdb9a488c4223a840b2b1089549a68b2866400eac7c25f9ec4,PodSandboxId:87ef372961342fafea467cb8be7006d850b5c92cfdf6baeccd5c98386a23d1e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724096183592835371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f73f5a4e-a25a-42c4-99ff-2655e669492d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:44:41 multinode-548379 crio[2728]: time="2024-08-19 19:44:41.969465106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=523a2b43-1873-494a-87ae-b7a6d04a13df name=/runtime.v1.RuntimeService/Version
	Aug 19 19:44:41 multinode-548379 crio[2728]: time="2024-08-19 19:44:41.969557710Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=523a2b43-1873-494a-87ae-b7a6d04a13df name=/runtime.v1.RuntimeService/Version
	Aug 19 19:44:41 multinode-548379 crio[2728]: time="2024-08-19 19:44:41.970695253Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f40cdb9b-f082-4882-ae66-75cf2b37486f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:44:41 multinode-548379 crio[2728]: time="2024-08-19 19:44:41.971262172Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096681971084173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f40cdb9b-f082-4882-ae66-75cf2b37486f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:44:41 multinode-548379 crio[2728]: time="2024-08-19 19:44:41.971746313Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0cc4f97e-27b5-4c79-8094-b4ba828cc7e9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:44:41 multinode-548379 crio[2728]: time="2024-08-19 19:44:41.971815698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0cc4f97e-27b5-4c79-8094-b4ba828cc7e9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:44:41 multinode-548379 crio[2728]: time="2024-08-19 19:44:41.972318591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ce06179cd684646b66dbd324528e06932e8762fb7fb07e6495116aafe4a5e5a,PodSandboxId:c9ee51d13730c97f9ae2b716aabb29354754fc5e7229a23aa26fdf52e5558ef4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724096618567373859,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb22ddb238a00e49806fe2f3b3d3903d48456469a2171875dad0657f770e963,PodSandboxId:e2a1b8fee23ca1e8f926715fbb35a9aec79376b5c595b6efaf721bf7182ab83d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724096585124100451,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5526087b87c13c5800d983aaa9ee7dba1126cc9d45ae7a4da130df1907428b5,PodSandboxId:547e595b7b5c01f0e0def61a83c96ff3cf563c3f44dd035bf1ea22bdd26c88ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724096585099078943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f
49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b1bd21d08ea6058aa3244307ac730803d425e7942459534b2e2f1d8afcf7a5,PodSandboxId:8a448d625b7cefdfb96ba6e560c16df86fb29923713f3b3c62638806b24b995f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724096584941318887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae957bd572857eca22ae7721f8b447ffa030532411fe7768a67d7e9fba77d34f,PodSandboxId:4a044dfbc16cc1b1a23a820838db146a599c773c5067243491256d58db88e7b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724096584892451234,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946a6f1e520ab3866b797503ca3f72b7e262206f3595d0eafdb8f0bdc25d5845,PodSandboxId:3d6d7ac6cab8d586b365307ed6cbdd5b0f0a5b5504582f04cbccd6f3b1286f0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724096579998215591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c9bcc4dcef7cee9bcb1ab6f982a11667ff9fb0c9c2cd7290e1c8fa52eac90e,PodSandboxId:caec45dd0098803294ca6fb9f81f93e9a59f18eacdf92503e16ae9ded54ed1fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724096580029318660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9ef8b8013cdc17033d16419fd14ec8e27d80e9deedb824c413d93e9da4fb06,PodSandboxId:40c11ab920ab3b8b2e434a89638dc8418c8ed954966d4bc37005a4b5dcd47442,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724096580005690714,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d520dc4f2fb7ed6e16b3b726559b7e9fdb6e4422b957094032e868df08609cc5,PodSandboxId:49530a6bcf60e0045013d3f7fde0245a60ecb75818b258ac01eb896c24e95115,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724096579992393873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9776d352552c1949c2e922c4b1dbccb4ef607d23ad40fa301057767159a8df7,PodSandboxId:fbb109b13ee5d14f80a7a4664d52d242bbef71efdf0dd882cc60050992ca4e8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724096264085479148,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88e348b3a84742ec396ed84b9260f8708967811698fae9424a4e5c6f73eb9ec,PodSandboxId:2470ac60e23979596614d3254f07aeb346b230a30af3cc3bc646c124c18ae9db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724096211013464242,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0adbbf88e8e6f5c8f99f70f167d0be23685b6efe52beb7c096a3d94e98e27772,PodSandboxId:362dc4a5650b90487f55397ab06653fb47fb8df2f0641963873dfbd1d525c55d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724096211010541482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3947c41c8021a850328a6ef431f4aa7c273b46da629f27172dd9594e95a309e8,PodSandboxId:34092c53295429c688907b804397c216002b82eeb57dbb2ecc72b6e1d7a69c00,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724096199493021910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cde9d50116f223ef6493538f5ae20918c477628d4127a2ac98182f64446395b,PodSandboxId:b29008cd62ee35360342dcf8c299f9df4d281b643ef01a34d97363c53e10df40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724096196163904164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e83dd57bfe6d4f267dc1e80968e066a1107866708e1ed336dd94dde65dc97994,PodSandboxId:87e7b3fa3f69e716a3941fc1358c45a3623db4d186825437b12fbd558d518f2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724096183675176161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d
2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3ea7b8c2ce2841b0ee7f45eca09f3d59b14fbb72dd20976dd9581f6ecc942,PodSandboxId:9887b7e5dfb51f294b8ad7b79570292afc80291232d468516e3aa0d8cfa174c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724096183677835635,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24eff3c1f0c13300bdb7581b9cafdc8c7c9bea7e77c52fc6491f4e3055c7eea1,PodSandboxId:688f4e7ce5e29c81c6cab2795297896410a965465b22c91ae0c5f4f711c4a43a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724096183619481989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3794f7593119cdb9a488c4223a840b2b1089549a68b2866400eac7c25f9ec4,PodSandboxId:87ef372961342fafea467cb8be7006d850b5c92cfdf6baeccd5c98386a23d1e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724096183592835371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0cc4f97e-27b5-4c79-8094-b4ba828cc7e9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:44:42 multinode-548379 crio[2728]: time="2024-08-19 19:44:42.012681094Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=560d57af-70db-450e-8ac3-74064ae8cc3f name=/runtime.v1.RuntimeService/Version
	Aug 19 19:44:42 multinode-548379 crio[2728]: time="2024-08-19 19:44:42.012758619Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=560d57af-70db-450e-8ac3-74064ae8cc3f name=/runtime.v1.RuntimeService/Version
	Aug 19 19:44:42 multinode-548379 crio[2728]: time="2024-08-19 19:44:42.014254744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68994d8c-f87d-4dec-92ed-5109dd0958ef name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:44:42 multinode-548379 crio[2728]: time="2024-08-19 19:44:42.014711644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096682014687277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68994d8c-f87d-4dec-92ed-5109dd0958ef name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:44:42 multinode-548379 crio[2728]: time="2024-08-19 19:44:42.015312458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b46a0c84-cfd2-4312-8a5c-d4bd892cca7c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:44:42 multinode-548379 crio[2728]: time="2024-08-19 19:44:42.015366761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b46a0c84-cfd2-4312-8a5c-d4bd892cca7c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:44:42 multinode-548379 crio[2728]: time="2024-08-19 19:44:42.015692637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ce06179cd684646b66dbd324528e06932e8762fb7fb07e6495116aafe4a5e5a,PodSandboxId:c9ee51d13730c97f9ae2b716aabb29354754fc5e7229a23aa26fdf52e5558ef4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724096618567373859,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb22ddb238a00e49806fe2f3b3d3903d48456469a2171875dad0657f770e963,PodSandboxId:e2a1b8fee23ca1e8f926715fbb35a9aec79376b5c595b6efaf721bf7182ab83d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724096585124100451,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5526087b87c13c5800d983aaa9ee7dba1126cc9d45ae7a4da130df1907428b5,PodSandboxId:547e595b7b5c01f0e0def61a83c96ff3cf563c3f44dd035bf1ea22bdd26c88ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724096585099078943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f
49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b1bd21d08ea6058aa3244307ac730803d425e7942459534b2e2f1d8afcf7a5,PodSandboxId:8a448d625b7cefdfb96ba6e560c16df86fb29923713f3b3c62638806b24b995f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724096584941318887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae957bd572857eca22ae7721f8b447ffa030532411fe7768a67d7e9fba77d34f,PodSandboxId:4a044dfbc16cc1b1a23a820838db146a599c773c5067243491256d58db88e7b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724096584892451234,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946a6f1e520ab3866b797503ca3f72b7e262206f3595d0eafdb8f0bdc25d5845,PodSandboxId:3d6d7ac6cab8d586b365307ed6cbdd5b0f0a5b5504582f04cbccd6f3b1286f0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724096579998215591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c9bcc4dcef7cee9bcb1ab6f982a11667ff9fb0c9c2cd7290e1c8fa52eac90e,PodSandboxId:caec45dd0098803294ca6fb9f81f93e9a59f18eacdf92503e16ae9ded54ed1fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724096580029318660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9ef8b8013cdc17033d16419fd14ec8e27d80e9deedb824c413d93e9da4fb06,PodSandboxId:40c11ab920ab3b8b2e434a89638dc8418c8ed954966d4bc37005a4b5dcd47442,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724096580005690714,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d520dc4f2fb7ed6e16b3b726559b7e9fdb6e4422b957094032e868df08609cc5,PodSandboxId:49530a6bcf60e0045013d3f7fde0245a60ecb75818b258ac01eb896c24e95115,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724096579992393873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9776d352552c1949c2e922c4b1dbccb4ef607d23ad40fa301057767159a8df7,PodSandboxId:fbb109b13ee5d14f80a7a4664d52d242bbef71efdf0dd882cc60050992ca4e8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724096264085479148,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88e348b3a84742ec396ed84b9260f8708967811698fae9424a4e5c6f73eb9ec,PodSandboxId:2470ac60e23979596614d3254f07aeb346b230a30af3cc3bc646c124c18ae9db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724096211013464242,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0adbbf88e8e6f5c8f99f70f167d0be23685b6efe52beb7c096a3d94e98e27772,PodSandboxId:362dc4a5650b90487f55397ab06653fb47fb8df2f0641963873dfbd1d525c55d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724096211010541482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3947c41c8021a850328a6ef431f4aa7c273b46da629f27172dd9594e95a309e8,PodSandboxId:34092c53295429c688907b804397c216002b82eeb57dbb2ecc72b6e1d7a69c00,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724096199493021910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cde9d50116f223ef6493538f5ae20918c477628d4127a2ac98182f64446395b,PodSandboxId:b29008cd62ee35360342dcf8c299f9df4d281b643ef01a34d97363c53e10df40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724096196163904164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e83dd57bfe6d4f267dc1e80968e066a1107866708e1ed336dd94dde65dc97994,PodSandboxId:87e7b3fa3f69e716a3941fc1358c45a3623db4d186825437b12fbd558d518f2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724096183675176161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d
2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3ea7b8c2ce2841b0ee7f45eca09f3d59b14fbb72dd20976dd9581f6ecc942,PodSandboxId:9887b7e5dfb51f294b8ad7b79570292afc80291232d468516e3aa0d8cfa174c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724096183677835635,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24eff3c1f0c13300bdb7581b9cafdc8c7c9bea7e77c52fc6491f4e3055c7eea1,PodSandboxId:688f4e7ce5e29c81c6cab2795297896410a965465b22c91ae0c5f4f711c4a43a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724096183619481989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3794f7593119cdb9a488c4223a840b2b1089549a68b2866400eac7c25f9ec4,PodSandboxId:87ef372961342fafea467cb8be7006d850b5c92cfdf6baeccd5c98386a23d1e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724096183592835371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b46a0c84-cfd2-4312-8a5c-d4bd892cca7c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:44:42 multinode-548379 crio[2728]: time="2024-08-19 19:44:42.062065681Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d5cf1d6-5770-4c1a-9146-23ace6b47953 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:44:42 multinode-548379 crio[2728]: time="2024-08-19 19:44:42.062174062Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d5cf1d6-5770-4c1a-9146-23ace6b47953 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:44:42 multinode-548379 crio[2728]: time="2024-08-19 19:44:42.063189722Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5874d806-f449-4a7e-9b91-b728639783ab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:44:42 multinode-548379 crio[2728]: time="2024-08-19 19:44:42.063595598Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096682063573245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5874d806-f449-4a7e-9b91-b728639783ab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:44:42 multinode-548379 crio[2728]: time="2024-08-19 19:44:42.064206076Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65ba7ac7-4940-4370-91f3-12ada68ed156 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:44:42 multinode-548379 crio[2728]: time="2024-08-19 19:44:42.064262775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65ba7ac7-4940-4370-91f3-12ada68ed156 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:44:42 multinode-548379 crio[2728]: time="2024-08-19 19:44:42.064589835Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ce06179cd684646b66dbd324528e06932e8762fb7fb07e6495116aafe4a5e5a,PodSandboxId:c9ee51d13730c97f9ae2b716aabb29354754fc5e7229a23aa26fdf52e5558ef4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724096618567373859,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb22ddb238a00e49806fe2f3b3d3903d48456469a2171875dad0657f770e963,PodSandboxId:e2a1b8fee23ca1e8f926715fbb35a9aec79376b5c595b6efaf721bf7182ab83d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724096585124100451,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5526087b87c13c5800d983aaa9ee7dba1126cc9d45ae7a4da130df1907428b5,PodSandboxId:547e595b7b5c01f0e0def61a83c96ff3cf563c3f44dd035bf1ea22bdd26c88ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724096585099078943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f
49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b1bd21d08ea6058aa3244307ac730803d425e7942459534b2e2f1d8afcf7a5,PodSandboxId:8a448d625b7cefdfb96ba6e560c16df86fb29923713f3b3c62638806b24b995f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724096584941318887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae957bd572857eca22ae7721f8b447ffa030532411fe7768a67d7e9fba77d34f,PodSandboxId:4a044dfbc16cc1b1a23a820838db146a599c773c5067243491256d58db88e7b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724096584892451234,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946a6f1e520ab3866b797503ca3f72b7e262206f3595d0eafdb8f0bdc25d5845,PodSandboxId:3d6d7ac6cab8d586b365307ed6cbdd5b0f0a5b5504582f04cbccd6f3b1286f0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724096579998215591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c9bcc4dcef7cee9bcb1ab6f982a11667ff9fb0c9c2cd7290e1c8fa52eac90e,PodSandboxId:caec45dd0098803294ca6fb9f81f93e9a59f18eacdf92503e16ae9ded54ed1fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724096580029318660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9ef8b8013cdc17033d16419fd14ec8e27d80e9deedb824c413d93e9da4fb06,PodSandboxId:40c11ab920ab3b8b2e434a89638dc8418c8ed954966d4bc37005a4b5dcd47442,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724096580005690714,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d520dc4f2fb7ed6e16b3b726559b7e9fdb6e4422b957094032e868df08609cc5,PodSandboxId:49530a6bcf60e0045013d3f7fde0245a60ecb75818b258ac01eb896c24e95115,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724096579992393873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9776d352552c1949c2e922c4b1dbccb4ef607d23ad40fa301057767159a8df7,PodSandboxId:fbb109b13ee5d14f80a7a4664d52d242bbef71efdf0dd882cc60050992ca4e8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724096264085479148,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88e348b3a84742ec396ed84b9260f8708967811698fae9424a4e5c6f73eb9ec,PodSandboxId:2470ac60e23979596614d3254f07aeb346b230a30af3cc3bc646c124c18ae9db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724096211013464242,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0adbbf88e8e6f5c8f99f70f167d0be23685b6efe52beb7c096a3d94e98e27772,PodSandboxId:362dc4a5650b90487f55397ab06653fb47fb8df2f0641963873dfbd1d525c55d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724096211010541482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3947c41c8021a850328a6ef431f4aa7c273b46da629f27172dd9594e95a309e8,PodSandboxId:34092c53295429c688907b804397c216002b82eeb57dbb2ecc72b6e1d7a69c00,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724096199493021910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cde9d50116f223ef6493538f5ae20918c477628d4127a2ac98182f64446395b,PodSandboxId:b29008cd62ee35360342dcf8c299f9df4d281b643ef01a34d97363c53e10df40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724096196163904164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e83dd57bfe6d4f267dc1e80968e066a1107866708e1ed336dd94dde65dc97994,PodSandboxId:87e7b3fa3f69e716a3941fc1358c45a3623db4d186825437b12fbd558d518f2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724096183675176161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d
2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3ea7b8c2ce2841b0ee7f45eca09f3d59b14fbb72dd20976dd9581f6ecc942,PodSandboxId:9887b7e5dfb51f294b8ad7b79570292afc80291232d468516e3aa0d8cfa174c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724096183677835635,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24eff3c1f0c13300bdb7581b9cafdc8c7c9bea7e77c52fc6491f4e3055c7eea1,PodSandboxId:688f4e7ce5e29c81c6cab2795297896410a965465b22c91ae0c5f4f711c4a43a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724096183619481989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3794f7593119cdb9a488c4223a840b2b1089549a68b2866400eac7c25f9ec4,PodSandboxId:87ef372961342fafea467cb8be7006d850b5c92cfdf6baeccd5c98386a23d1e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724096183592835371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65ba7ac7-4940-4370-91f3-12ada68ed156 name=/runtime.v1.RuntimeService/ListContainers
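The level=debug RPC traces above (Version, ImageFsInfo, ListContainers) are CRI-O's journal output on the node. On a live profile the same stream can be followed directly; a sketch, assuming the multinode-548379 VM is still up:

	# tail CRI-O's systemd journal on the minikube node (crio runs as the "crio" unit, see "crio[2728]" above)
	out/minikube-linux-amd64 -p multinode-548379 ssh "sudo journalctl -u crio --no-pager -n 50"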
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7ce06179cd684       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   c9ee51d13730c       busybox-7dff88458-bzhsh
	eeb22ddb238a0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   e2a1b8fee23ca       coredns-6f6b679f8f-tjtx5
	e5526087b87c1       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   547e595b7b5c0       kindnet-dghqn
	66b1bd21d08ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   8a448d625b7ce       storage-provisioner
	ae957bd572857       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   4a044dfbc16cc       kube-proxy-wwv5c
	15c9bcc4dcef7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   caec45dd00988       kube-apiserver-multinode-548379
	9d9ef8b8013cd       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   40c11ab920ab3       kube-scheduler-multinode-548379
	946a6f1e520ab       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   3d6d7ac6cab8d       etcd-multinode-548379
	d520dc4f2fb7e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   49530a6bcf60e       kube-controller-manager-multinode-548379
	e9776d352552c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   fbb109b13ee5d       busybox-7dff88458-bzhsh
	f88e348b3a847       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   2470ac60e2397       storage-provisioner
	0adbbf88e8e6f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   362dc4a5650b9       coredns-6f6b679f8f-tjtx5
	3947c41c8021a       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   34092c5329542       kindnet-dghqn
	8cde9d50116f2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   b29008cd62ee3       kube-proxy-wwv5c
	97b3ea7b8c2ce       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   9887b7e5dfb51       etcd-multinode-548379
	e83dd57bfe6d4       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   87e7b3fa3f69e       kube-scheduler-multinode-548379
	24eff3c1f0c13       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   688f4e7ce5e29       kube-controller-manager-multinode-548379
	0c3794f759311       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   87ef372961342       kube-apiserver-multinode-548379
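A listing of this shape can also be regenerated on the node itself with crictl, which talks to the same CRI-O socket; a minimal sketch, assuming the profile is still running:

	# show all containers, including exited ones, matching the STATE and ATTEMPT columns above
	out/minikube-linux-amd64 -p multinode-548379 ssh "sudo crictl ps -a"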
	
	
	==> coredns [0adbbf88e8e6f5c8f99f70f167d0be23685b6efe52beb7c096a3d94e98e27772] <==
	[INFO] 10.244.1.2:53626 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002159167s
	[INFO] 10.244.1.2:56073 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088712s
	[INFO] 10.244.1.2:59545 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067922s
	[INFO] 10.244.1.2:42139 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001637555s
	[INFO] 10.244.1.2:34970 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064038s
	[INFO] 10.244.1.2:39551 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000141993s
	[INFO] 10.244.1.2:51608 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073165s
	[INFO] 10.244.0.3:50563 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000073873s
	[INFO] 10.244.0.3:52236 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000040179s
	[INFO] 10.244.0.3:58501 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000035598s
	[INFO] 10.244.0.3:43640 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000027295s
	[INFO] 10.244.1.2:49058 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169808s
	[INFO] 10.244.1.2:43243 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090393s
	[INFO] 10.244.1.2:52058 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097467s
	[INFO] 10.244.1.2:55033 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080488s
	[INFO] 10.244.0.3:43467 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112671s
	[INFO] 10.244.0.3:52651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099421s
	[INFO] 10.244.0.3:44060 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079557s
	[INFO] 10.244.0.3:38985 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111723s
	[INFO] 10.244.1.2:55909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121602s
	[INFO] 10.244.1.2:34912 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116408s
	[INFO] 10.244.1.2:37941 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073106s
	[INFO] 10.244.1.2:40067 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082935s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [eeb22ddb238a00e49806fe2f3b3d3903d48456469a2171875dad0657f770e963] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38906 - 19609 "HINFO IN 8183877632649629386.8280303456427227633. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021951872s
	
	
	==> describe nodes <==
	Name:               multinode-548379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-548379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=multinode-548379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_36_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:36:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-548379
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:44:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:43:02 +0000   Mon, 19 Aug 2024 19:36:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:43:02 +0000   Mon, 19 Aug 2024 19:36:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:43:02 +0000   Mon, 19 Aug 2024 19:36:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:43:02 +0000   Mon, 19 Aug 2024 19:36:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.35
	  Hostname:    multinode-548379
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9159c9db44cd4d7da4cdf638769b739e
	  System UUID:                9159c9db-44cd-4d7d-a4cd-f638769b739e
	  Boot ID:                    b72926b1-7c78-4bb8-8dd8-4c1656ba65cc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bzhsh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 coredns-6f6b679f8f-tjtx5                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m8s
	  kube-system                 etcd-multinode-548379                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m13s
	  kube-system                 kindnet-dghqn                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m9s
	  kube-system                 kube-apiserver-multinode-548379             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-controller-manager-multinode-548379    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-proxy-wwv5c                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-scheduler-multinode-548379             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m5s                   kube-proxy       
	  Normal  Starting                 97s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  8m19s (x8 over 8m19s)  kubelet          Node multinode-548379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m19s (x8 over 8m19s)  kubelet          Node multinode-548379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m19s (x7 over 8m19s)  kubelet          Node multinode-548379 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m14s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m13s                  kubelet          Node multinode-548379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m13s                  kubelet          Node multinode-548379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m13s                  kubelet          Node multinode-548379 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m9s                   node-controller  Node multinode-548379 event: Registered Node multinode-548379 in Controller
	  Normal  NodeReady                7m52s                  kubelet          Node multinode-548379 status is now: NodeReady
	  Normal  Starting                 103s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)    kubelet          Node multinode-548379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)    kubelet          Node multinode-548379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)    kubelet          Node multinode-548379 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           96s                    node-controller  Node multinode-548379 event: Registered Node multinode-548379 in Controller
	
	
	Name:               multinode-548379-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-548379-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=multinode-548379
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T19_43_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:43:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-548379-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:44:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:44:14 +0000   Mon, 19 Aug 2024 19:43:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:44:14 +0000   Mon, 19 Aug 2024 19:43:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:44:14 +0000   Mon, 19 Aug 2024 19:43:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:44:14 +0000   Mon, 19 Aug 2024 19:44:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    multinode-548379-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9eca4535221b4296ac8c5a4d710f7f12
	  System UUID:                9eca4535-221b-4296-ac8c-5a4d710f7f12
	  Boot ID:                    ac1cf6cd-7037-4df9-a152-b5f965c742f3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-df4jm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kindnet-pwhrw              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m24s
	  kube-system                 kube-proxy-knvbd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m18s                  kube-proxy  
	  Normal  Starting                 54s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m24s (x2 over 7m24s)  kubelet     Node multinode-548379-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m24s (x2 over 7m24s)  kubelet     Node multinode-548379-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m24s (x2 over 7m24s)  kubelet     Node multinode-548379-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m24s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m4s                   kubelet     Node multinode-548379-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  59s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  58s (x2 over 59s)      kubelet     Node multinode-548379-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x2 over 59s)      kubelet     Node multinode-548379-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x2 over 59s)      kubelet     Node multinode-548379-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                40s                    kubelet     Node multinode-548379-m02 status is now: NodeReady
	
	
	Name:               multinode-548379-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-548379-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=multinode-548379
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T19_44_21_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:44:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-548379-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:44:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:44:39 +0000   Mon, 19 Aug 2024 19:44:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:44:39 +0000   Mon, 19 Aug 2024 19:44:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:44:39 +0000   Mon, 19 Aug 2024 19:44:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:44:39 +0000   Mon, 19 Aug 2024 19:44:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.197
	  Hostname:    multinode-548379-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f5a42bc3353488fbc1de231504fd7e3
	  System UUID:                7f5a42bc-3353-488f-bc1d-e231504fd7e3
	  Boot ID:                    3865b7aa-8b16-44fb-9e98-e0db7307d1af
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-tq6d4       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m31s
	  kube-system                 kube-proxy-tlpv4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m38s                  kube-proxy       
	  Normal  Starting                 17s                    kube-proxy       
	  Normal  Starting                 6m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m31s (x2 over 6m31s)  kubelet          Node multinode-548379-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s (x2 over 6m31s)  kubelet          Node multinode-548379-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s (x2 over 6m31s)  kubelet          Node multinode-548379-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m12s                  kubelet          Node multinode-548379-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m43s (x2 over 5m43s)  kubelet          Node multinode-548379-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m43s (x2 over 5m43s)  kubelet          Node multinode-548379-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m43s (x2 over 5m43s)  kubelet          Node multinode-548379-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m24s                  kubelet          Node multinode-548379-m03 status is now: NodeReady
	  Normal  CIDRAssignmentFailed     21s                    cidrAllocator    Node multinode-548379-m03 status is now: CIDRAssignmentFailed
	  Normal  Starting                 21s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)      kubelet          Node multinode-548379-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)      kubelet          Node multinode-548379-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)      kubelet          Node multinode-548379-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                    node-controller  Node multinode-548379-m03 event: Registered Node multinode-548379-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-548379-m03 status is now: NodeReady
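The three node descriptions above are kubectl describe output for the control-plane and the two workers. Assuming the kubeconfig context carries the profile name (minikube's default), they can be refreshed with:

	# full per-node detail, as captured above
	kubectl --context multinode-548379 describe nodes
	# condensed view of roles, kubelet versions and internal IPs
	kubectl --context multinode-548379 get nodes -o wide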
	
	
	==> dmesg <==
	[  +0.059633] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.178731] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.155996] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.282527] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +4.067585] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +3.753020] systemd-fstab-generator[891]: Ignoring "noauto" option for root device
	[  +0.065275] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.988151] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +0.076971] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.621058] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.532343] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.649446] kauditd_printk_skb: 32 callbacks suppressed
	[Aug19 19:37] kauditd_printk_skb: 14 callbacks suppressed
	[Aug19 19:42] systemd-fstab-generator[2648]: Ignoring "noauto" option for root device
	[  +0.185713] systemd-fstab-generator[2660]: Ignoring "noauto" option for root device
	[  +0.192391] systemd-fstab-generator[2674]: Ignoring "noauto" option for root device
	[  +0.146597] systemd-fstab-generator[2686]: Ignoring "noauto" option for root device
	[  +0.282321] systemd-fstab-generator[2714]: Ignoring "noauto" option for root device
	[  +0.729315] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +2.526523] systemd-fstab-generator[2935]: Ignoring "noauto" option for root device
	[  +1.059965] kauditd_printk_skb: 179 callbacks suppressed
	[Aug19 19:43] kauditd_printk_skb: 25 callbacks suppressed
	[ +14.480559] systemd-fstab-generator[3792]: Ignoring "noauto" option for root device
	[  +0.089734] kauditd_printk_skb: 6 callbacks suppressed
	[ +18.780510] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [946a6f1e520ab3866b797503ca3f72b7e262206f3595d0eafdb8f0bdc25d5845] <==
	{"level":"info","ts":"2024-08-19T19:43:00.429574Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"45f5838de4bd43f1","local-member-id":"732232f81d76e930","added-peer-id":"732232f81d76e930","added-peer-peer-urls":["https://192.168.39.35:2380"]}
	{"level":"info","ts":"2024-08-19T19:43:00.429823Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"45f5838de4bd43f1","local-member-id":"732232f81d76e930","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:43:00.429879Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:43:00.442547Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:43:00.447433Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T19:43:00.447654Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"732232f81d76e930","initial-advertise-peer-urls":["https://192.168.39.35:2380"],"listen-peer-urls":["https://192.168.39.35:2380"],"advertise-client-urls":["https://192.168.39.35:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.35:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T19:43:00.447710Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T19:43:00.447829Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-08-19T19:43:00.447852Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-08-19T19:43:01.373204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T19:43:01.373320Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T19:43:01.373375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 received MsgPreVoteResp from 732232f81d76e930 at term 2"}
	{"level":"info","ts":"2024-08-19T19:43:01.373418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T19:43:01.373443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 received MsgVoteResp from 732232f81d76e930 at term 3"}
	{"level":"info","ts":"2024-08-19T19:43:01.373472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T19:43:01.373497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 732232f81d76e930 elected leader 732232f81d76e930 at term 3"}
	{"level":"info","ts":"2024-08-19T19:43:01.381299Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"732232f81d76e930","local-member-attributes":"{Name:multinode-548379 ClientURLs:[https://192.168.39.35:2379]}","request-path":"/0/members/732232f81d76e930/attributes","cluster-id":"45f5838de4bd43f1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T19:43:01.381465Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:43:01.381763Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:43:01.382421Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:43:01.383208Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.35:2379"}
	{"level":"info","ts":"2024-08-19T19:43:01.383688Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:43:01.384437Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T19:43:01.384507Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T19:43:01.384532Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [97b3ea7b8c2ce2841b0ee7f45eca09f3d59b14fbb72dd20976dd9581f6ecc942] <==
	{"level":"info","ts":"2024-08-19T19:37:18.751923Z","caller":"traceutil/trace.go:171","msg":"trace[1691852702] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"233.544364ms","start":"2024-08-19T19:37:18.518365Z","end":"2024-08-19T19:37:18.751909Z","steps":["trace[1691852702] 'process raft request'  (duration: 116.141684ms)","trace[1691852702] 'compare'  (duration: 116.660454ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T19:38:11.834338Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.118202ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16803090103365747120 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-548379-m03.17ed38714b386237\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-548379-m03.17ed38714b386237\" value_size:642 lease:7579718066510970918 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-19T19:38:11.834710Z","caller":"traceutil/trace.go:171","msg":"trace[1198149453] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"226.831835ms","start":"2024-08-19T19:38:11.607848Z","end":"2024-08-19T19:38:11.834679Z","steps":["trace[1198149453] 'process raft request'  (duration: 126.3313ms)","trace[1198149453] 'compare'  (duration: 100.0226ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T19:38:18.899806Z","caller":"traceutil/trace.go:171","msg":"trace[1678700882] transaction","detail":"{read_only:false; response_revision:650; number_of_response:1; }","duration":"213.310315ms","start":"2024-08-19T19:38:18.686483Z","end":"2024-08-19T19:38:18.899793Z","steps":["trace[1678700882] 'process raft request'  (duration: 213.219935ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T19:38:21.798632Z","caller":"traceutil/trace.go:171","msg":"trace[134708294] linearizableReadLoop","detail":"{readStateIndex:693; appliedIndex:692; }","duration":"144.39406ms","start":"2024-08-19T19:38:21.654209Z","end":"2024-08-19T19:38:21.798603Z","steps":["trace[134708294] 'read index received'  (duration: 144.256156ms)","trace[134708294] 'applied index is now lower than readState.Index'  (duration: 137.383µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T19:38:21.798782Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.550832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-548379-m03\" ","response":"range_response_count:1 size:2887"}
	{"level":"info","ts":"2024-08-19T19:38:21.798859Z","caller":"traceutil/trace.go:171","msg":"trace[402560877] range","detail":"{range_begin:/registry/minions/multinode-548379-m03; range_end:; response_count:1; response_revision:659; }","duration":"144.638202ms","start":"2024-08-19T19:38:21.654205Z","end":"2024-08-19T19:38:21.798843Z","steps":["trace[402560877] 'agreement among raft nodes before linearized reading'  (duration: 144.488224ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T19:38:21.799039Z","caller":"traceutil/trace.go:171","msg":"trace[2090916661] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"150.75469ms","start":"2024-08-19T19:38:21.648271Z","end":"2024-08-19T19:38:21.799026Z","steps":["trace[2090916661] 'process raft request'  (duration: 150.212825ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:38:22.288052Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.31546ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16803090103365747261 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-548379-m03\" mod_revision:637 > success:<request_put:<key:\"/registry/minions/multinode-548379-m03\" value_size:3127 >> failure:<request_range:<key:\"/registry/minions/multinode-548379-m03\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-19T19:38:22.288208Z","caller":"traceutil/trace.go:171","msg":"trace[16616940] linearizableReadLoop","detail":"{readStateIndex:695; appliedIndex:694; }","duration":"194.870417ms","start":"2024-08-19T19:38:22.093326Z","end":"2024-08-19T19:38:22.288196Z","steps":["trace[16616940] 'read index received'  (duration: 60.029687ms)","trace[16616940] 'applied index is now lower than readState.Index'  (duration: 134.839368ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T19:38:22.288284Z","caller":"traceutil/trace.go:171","msg":"trace[882393299] transaction","detail":"{read_only:false; response_revision:661; number_of_response:1; }","duration":"297.358467ms","start":"2024-08-19T19:38:21.990917Z","end":"2024-08-19T19:38:22.288276Z","steps":["trace[882393299] 'process raft request'  (duration: 162.524211ms)","trace[882393299] 'compare'  (duration: 134.222847ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T19:38:22.288509Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.170545ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-19T19:38:22.288550Z","caller":"traceutil/trace.go:171","msg":"trace[1894142840] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:661; }","duration":"195.223443ms","start":"2024-08-19T19:38:22.093320Z","end":"2024-08-19T19:38:22.288544Z","steps":["trace[1894142840] 'agreement among raft nodes before linearized reading'  (duration: 195.113908ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:38:22.288664Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.403836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-548379-m03\" ","response":"range_response_count:1 size:3188"}
	{"level":"info","ts":"2024-08-19T19:38:22.288694Z","caller":"traceutil/trace.go:171","msg":"trace[104543842] range","detail":"{range_begin:/registry/minions/multinode-548379-m03; range_end:; response_count:1; response_revision:661; }","duration":"134.435073ms","start":"2024-08-19T19:38:22.154254Z","end":"2024-08-19T19:38:22.288690Z","steps":["trace[104543842] 'agreement among raft nodes before linearized reading'  (duration: 134.387233ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T19:41:23.592020Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T19:41:23.592083Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-548379","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.35:2380"],"advertise-client-urls":["https://192.168.39.35:2379"]}
	{"level":"warn","ts":"2024-08-19T19:41:23.592197Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T19:41:23.592285Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T19:41:23.633312Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.35:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T19:41:23.633426Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.35:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T19:41:23.633598Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"732232f81d76e930","current-leader-member-id":"732232f81d76e930"}
	{"level":"info","ts":"2024-08-19T19:41:23.636455Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-08-19T19:41:23.636624Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-08-19T19:41:23.636677Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-548379","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.35:2380"],"advertise-client-urls":["https://192.168.39.35:2379"]}
	
	
	==> kernel <==
	 19:44:42 up 8 min,  0 users,  load average: 0.13, 0.17, 0.10
	Linux multinode-548379 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3947c41c8021a850328a6ef431f4aa7c273b46da629f27172dd9594e95a309e8] <==
	I0819 19:40:40.350168       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.3.0/24] 
	I0819 19:40:50.350444       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:40:50.350570       1 main.go:299] handling current node
	I0819 19:40:50.350612       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:40:50.350638       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:40:50.350863       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0819 19:40:50.350904       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.3.0/24] 
	I0819 19:41:00.353807       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:41:00.353841       1 main.go:299] handling current node
	I0819 19:41:00.353858       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:41:00.353862       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:41:00.353983       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0819 19:41:00.354006       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.3.0/24] 
	I0819 19:41:10.358926       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:41:10.359027       1 main.go:299] handling current node
	I0819 19:41:10.359056       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:41:10.359074       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:41:10.359262       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0819 19:41:10.359290       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.3.0/24] 
	I0819 19:41:20.357485       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:41:20.357582       1 main.go:299] handling current node
	I0819 19:41:20.357612       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:41:20.357629       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:41:20.357772       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0819 19:41:20.357807       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e5526087b87c13c5800d983aaa9ee7dba1126cc9d45ae7a4da130df1907428b5] <==
	I0819 19:43:55.849041       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.3.0/24] 
	I0819 19:44:05.849170       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:44:05.849270       1 main.go:299] handling current node
	I0819 19:44:05.849299       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:44:05.849317       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:44:05.849476       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0819 19:44:05.849498       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.3.0/24] 
	I0819 19:44:15.849675       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:44:15.849781       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:44:15.849936       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0819 19:44:15.849959       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.3.0/24] 
	I0819 19:44:15.850016       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:44:15.850035       1 main.go:299] handling current node
	I0819 19:44:25.848790       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:44:25.848831       1 main.go:299] handling current node
	I0819 19:44:25.848846       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:44:25.848853       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:44:25.848989       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0819 19:44:25.849011       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.2.0/24] 
	I0819 19:44:35.848579       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:44:35.848700       1 main.go:299] handling current node
	I0819 19:44:35.848749       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:44:35.848772       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:44:35.848918       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0819 19:44:35.848949       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0c3794f7593119cdb9a488c4223a840b2b1089549a68b2866400eac7c25f9ec4] <==
	W0819 19:41:23.610683       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610716       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610747       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610801       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610826       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610860       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610892       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610925       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610951       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.611009       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.611037       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.611063       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.616604       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617639       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617776       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617841       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617883       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617920       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617936       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617954       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617990       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.618030       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.618067       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.618103       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.621308       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [15c9bcc4dcef7cee9bcb1ab6f982a11667ff9fb0c9c2cd7290e1c8fa52eac90e] <==
	I0819 19:43:02.738384       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 19:43:02.738697       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 19:43:02.740618       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 19:43:02.741011       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 19:43:02.741054       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 19:43:02.755663       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 19:43:02.761380       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 19:43:02.761457       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 19:43:02.762349       1 aggregator.go:171] initial CRD sync complete...
	I0819 19:43:02.762379       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 19:43:02.762386       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 19:43:02.762391       1 cache.go:39] Caches are synced for autoregister controller
	I0819 19:43:02.762542       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 19:43:02.765408       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 19:43:02.765448       1 policy_source.go:224] refreshing policies
	I0819 19:43:02.771648       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0819 19:43:02.803783       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0819 19:43:03.640262       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 19:43:04.355761       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 19:43:04.675891       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 19:43:04.700891       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 19:43:04.893200       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 19:43:04.935784       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 19:43:06.411476       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 19:43:06.460320       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [24eff3c1f0c13300bdb7581b9cafdc8c7c9bea7e77c52fc6491f4e3055c7eea1] <==
	I0819 19:38:59.997414       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-548379-m02"
	I0819 19:39:00.023324       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-548379-m03" podCIDRs=["10.244.3.0/24"]
	I0819 19:39:00.023459       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	E0819 19:39:00.038316       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-548379-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-548379-m03" podCIDRs=["10.244.4.0/24"]
	E0819 19:39:00.038515       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-548379-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-548379-m03"
	E0819 19:39:00.038876       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-548379-m03': failed to patch node CIDR: Node \"multinode-548379-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0819 19:39:00.039046       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:00.045005       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:00.236992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:00.581254       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:03.263683       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:10.075580       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:18.019632       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-548379-m02"
	I0819 19:39:18.019702       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:18.027950       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:18.170037       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:40:03.187918       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:40:03.188253       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-548379-m02"
	I0819 19:40:03.194472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m02"
	I0819 19:40:03.217088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:40:03.226030       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m02"
	I0819 19:40:03.260699       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.128656ms"
	I0819 19:40:03.261082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="87.887µs"
	I0819 19:40:08.371745       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:40:18.468675       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m02"
	
	
	==> kube-controller-manager [d520dc4f2fb7ed6e16b3b726559b7e9fdb6e4422b957094032e868df08609cc5] <==
	I0819 19:44:04.387446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.603766ms"
	I0819 19:44:04.387551       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.927µs"
	I0819 19:44:06.081785       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m02"
	I0819 19:44:14.797603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m02"
	I0819 19:44:20.097707       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:20.113848       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:20.340385       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:20.340910       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-548379-m02"
	I0819 19:44:21.317243       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-548379-m02"
	I0819 19:44:21.317601       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-548379-m03\" does not exist"
	I0819 19:44:21.335307       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-548379-m03" podCIDRs=["10.244.2.0/24"]
	I0819 19:44:21.335344       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	E0819 19:44:21.349822       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-548379-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-548379-m03" podCIDRs=["10.244.3.0/24"]
	E0819 19:44:21.349872       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-548379-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-548379-m03"
	E0819 19:44:21.349915       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-548379-m03': failed to patch node CIDR: Node \"multinode-548379-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0819 19:44:21.349942       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:21.355620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:21.364960       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:21.712964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:26.196770       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:31.532904       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:39.192507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:39.192675       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-548379-m02"
	I0819 19:44:39.204225       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:41.104256       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	
	
	==> kube-proxy [8cde9d50116f223ef6493538f5ae20918c477628d4127a2ac98182f64446395b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:36:36.324419       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:36:36.333056       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.35"]
	E0819 19:36:36.333161       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:36:36.387601       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:36:36.387634       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:36:36.387665       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:36:36.390225       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:36:36.390524       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:36:36.390535       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:36:36.391730       1 config.go:197] "Starting service config controller"
	I0819 19:36:36.391750       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:36:36.391781       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:36:36.391786       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:36:36.392478       1 config.go:326] "Starting node config controller"
	I0819 19:36:36.392526       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:36:36.492837       1 shared_informer.go:320] Caches are synced for node config
	I0819 19:36:36.492870       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:36:36.492891       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ae957bd572857eca22ae7721f8b447ffa030532411fe7768a67d7e9fba77d34f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:43:05.282342       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:43:05.299548       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.35"]
	E0819 19:43:05.299860       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:43:05.374277       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:43:05.375200       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:43:05.375296       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:43:05.379518       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:43:05.379780       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:43:05.379966       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:43:05.381719       1 config.go:197] "Starting service config controller"
	I0819 19:43:05.383880       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:43:05.382902       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:43:05.385238       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:43:05.383415       1 config.go:326] "Starting node config controller"
	I0819 19:43:05.385275       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:43:05.486100       1 shared_informer.go:320] Caches are synced for node config
	I0819 19:43:05.486211       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:43:05.486222       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9d9ef8b8013cdc17033d16419fd14ec8e27d80e9deedb824c413d93e9da4fb06] <==
	I0819 19:43:01.331795       1 serving.go:386] Generated self-signed cert in-memory
	W0819 19:43:02.679636       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 19:43:02.679675       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 19:43:02.679687       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 19:43:02.679693       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 19:43:02.763039       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 19:43:02.763075       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:43:02.768460       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 19:43:02.768522       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 19:43:02.769036       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 19:43:02.769096       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 19:43:02.868955       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e83dd57bfe6d4f267dc1e80968e066a1107866708e1ed336dd94dde65dc97994] <==
	E0819 19:36:27.224902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.301041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 19:36:27.301099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.357308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 19:36:27.357357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.406339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 19:36:27.406386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.441394       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 19:36:27.441441       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.485460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 19:36:27.485765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.488473       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 19:36:27.489155       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.551043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 19:36:27.551246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.612400       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 19:36:27.612676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.712588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 19:36:27.712743       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.722878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 19:36:27.722997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.877783       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 19:36:27.877917       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 19:36:29.487473       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 19:41:23.592549       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 19:43:09 multinode-548379 kubelet[2942]: E0819 19:43:09.372544    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096589371303947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:43:09 multinode-548379 kubelet[2942]: E0819 19:43:09.372567    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096589371303947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:43:19 multinode-548379 kubelet[2942]: E0819 19:43:19.374182    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096599373723096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:43:19 multinode-548379 kubelet[2942]: E0819 19:43:19.374491    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096599373723096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:43:29 multinode-548379 kubelet[2942]: E0819 19:43:29.376006    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096609375682972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:43:29 multinode-548379 kubelet[2942]: E0819 19:43:29.376373    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096609375682972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:43:39 multinode-548379 kubelet[2942]: E0819 19:43:39.378820    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096619378410890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:43:39 multinode-548379 kubelet[2942]: E0819 19:43:39.378849    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096619378410890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:43:49 multinode-548379 kubelet[2942]: E0819 19:43:49.380682    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096629379942487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:43:49 multinode-548379 kubelet[2942]: E0819 19:43:49.381237    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096629379942487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:43:59 multinode-548379 kubelet[2942]: E0819 19:43:59.383279    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096639382759942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:43:59 multinode-548379 kubelet[2942]: E0819 19:43:59.383559    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096639382759942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:43:59 multinode-548379 kubelet[2942]: E0819 19:43:59.404295    2942 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:43:59 multinode-548379 kubelet[2942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:43:59 multinode-548379 kubelet[2942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:43:59 multinode-548379 kubelet[2942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:43:59 multinode-548379 kubelet[2942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:44:09 multinode-548379 kubelet[2942]: E0819 19:44:09.386280    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096649385699712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:44:09 multinode-548379 kubelet[2942]: E0819 19:44:09.386323    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096649385699712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:44:19 multinode-548379 kubelet[2942]: E0819 19:44:19.388317    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096659387804900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:44:19 multinode-548379 kubelet[2942]: E0819 19:44:19.388368    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096659387804900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:44:29 multinode-548379 kubelet[2942]: E0819 19:44:29.390527    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096669390222893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:44:29 multinode-548379 kubelet[2942]: E0819 19:44:29.390564    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096669390222893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:44:39 multinode-548379 kubelet[2942]: E0819 19:44:39.392634    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096679392174781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:44:39 multinode-548379 kubelet[2942]: E0819 19:44:39.392910    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096679392174781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:44:41.667424  472109 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-430949/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-548379 -n multinode-548379
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-548379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (322.77s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 stop
E0819 19:44:56.398917  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-548379 stop: exit status 82 (2m0.489570356s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-548379-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-548379 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-548379 status: exit status 3 (18.778854262s)

                                                
                                                
-- stdout --
	multinode-548379
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-548379-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:47:05.029533  472770 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.133:22: connect: no route to host
	E0819 19:47:05.029572  472770 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.133:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-548379 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-548379 -n multinode-548379
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-548379 logs -n 25: (1.42577969s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-548379 cp multinode-548379-m02:/home/docker/cp-test.txt                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379:/home/docker/cp-test_multinode-548379-m02_multinode-548379.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n multinode-548379 sudo cat                                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | /home/docker/cp-test_multinode-548379-m02_multinode-548379.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-548379 cp multinode-548379-m02:/home/docker/cp-test.txt                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m03:/home/docker/cp-test_multinode-548379-m02_multinode-548379-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n multinode-548379-m03 sudo cat                                   | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | /home/docker/cp-test_multinode-548379-m02_multinode-548379-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-548379 cp testdata/cp-test.txt                                                | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-548379 cp multinode-548379-m03:/home/docker/cp-test.txt                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3755015912/001/cp-test_multinode-548379-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-548379 cp multinode-548379-m03:/home/docker/cp-test.txt                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379:/home/docker/cp-test_multinode-548379-m03_multinode-548379.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n multinode-548379 sudo cat                                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | /home/docker/cp-test_multinode-548379-m03_multinode-548379.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-548379 cp multinode-548379-m03:/home/docker/cp-test.txt                       | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m02:/home/docker/cp-test_multinode-548379-m03_multinode-548379-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n multinode-548379-m02 sudo cat                                   | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | /home/docker/cp-test_multinode-548379-m03_multinode-548379-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-548379 node stop m03                                                          | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	| node    | multinode-548379 node start                                                             | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:39 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-548379                                                                | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:39 UTC |                     |
	| stop    | -p multinode-548379                                                                     | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:39 UTC |                     |
	| start   | -p multinode-548379                                                                     | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:41 UTC | 19 Aug 24 19:44 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-548379                                                                | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:44 UTC |                     |
	| node    | multinode-548379 node delete                                                            | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:44 UTC | 19 Aug 24 19:44 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-548379 stop                                                                   | multinode-548379 | jenkins | v1.33.1 | 19 Aug 24 19:44 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:41:22
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:41:22.590573  470984 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:41:22.590838  470984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:41:22.590848  470984 out.go:358] Setting ErrFile to fd 2...
	I0819 19:41:22.590853  470984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:41:22.591067  470984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:41:22.591638  470984 out.go:352] Setting JSON to false
	I0819 19:41:22.592662  470984 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12234,"bootTime":1724084249,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:41:22.592732  470984 start.go:139] virtualization: kvm guest
	I0819 19:41:22.595122  470984 out.go:177] * [multinode-548379] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:41:22.596694  470984 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 19:41:22.596753  470984 notify.go:220] Checking for updates...
	I0819 19:41:22.599198  470984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:41:22.600569  470984 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:41:22.601879  470984 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:41:22.603198  470984 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:41:22.604437  470984 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:41:22.606194  470984 config.go:182] Loaded profile config "multinode-548379": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:41:22.606306  470984 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 19:41:22.606830  470984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:41:22.606921  470984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:41:22.623376  470984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34597
	I0819 19:41:22.623974  470984 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:41:22.624656  470984 main.go:141] libmachine: Using API Version  1
	I0819 19:41:22.624678  470984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:41:22.625048  470984 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:41:22.625339  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:41:22.668345  470984 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:41:22.669596  470984 start.go:297] selected driver: kvm2
	I0819 19:41:22.669626  470984 start.go:901] validating driver "kvm2" against &{Name:multinode-548379 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-548379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:41:22.669813  470984 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:41:22.670154  470984 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:41:22.670236  470984 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:41:22.686549  470984 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:41:22.687415  470984 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:41:22.687474  470984 cni.go:84] Creating CNI manager for ""
	I0819 19:41:22.687480  470984 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 19:41:22.687531  470984 start.go:340] cluster config:
	{Name:multinode-548379 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-548379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:41:22.687662  470984 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:41:22.689677  470984 out.go:177] * Starting "multinode-548379" primary control-plane node in "multinode-548379" cluster
	I0819 19:41:22.690905  470984 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:41:22.690953  470984 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:41:22.690962  470984 cache.go:56] Caching tarball of preloaded images
	I0819 19:41:22.691062  470984 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:41:22.691072  470984 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:41:22.691184  470984 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/config.json ...
	I0819 19:41:22.691412  470984 start.go:360] acquireMachinesLock for multinode-548379: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:41:22.691460  470984 start.go:364] duration metric: took 26.32µs to acquireMachinesLock for "multinode-548379"
	I0819 19:41:22.691477  470984 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:41:22.691483  470984 fix.go:54] fixHost starting: 
	I0819 19:41:22.691797  470984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:41:22.691835  470984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:41:22.708034  470984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
	I0819 19:41:22.708545  470984 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:41:22.709095  470984 main.go:141] libmachine: Using API Version  1
	I0819 19:41:22.709119  470984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:41:22.709483  470984 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:41:22.709665  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:41:22.709840  470984 main.go:141] libmachine: (multinode-548379) Calling .GetState
	I0819 19:41:22.711562  470984 fix.go:112] recreateIfNeeded on multinode-548379: state=Running err=<nil>
	W0819 19:41:22.711591  470984 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:41:22.713737  470984 out.go:177] * Updating the running kvm2 "multinode-548379" VM ...
	I0819 19:41:22.715026  470984 machine.go:93] provisionDockerMachine start ...
	I0819 19:41:22.715049  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:41:22.715343  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:41:22.718200  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:22.718663  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:41:22.718686  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:22.718860  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:41:22.719105  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:22.719266  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:22.719411  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:41:22.719568  470984 main.go:141] libmachine: Using SSH client type: native
	I0819 19:41:22.719772  470984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0819 19:41:22.719784  470984 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:41:22.833515  470984 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-548379
	
	I0819 19:41:22.833543  470984 main.go:141] libmachine: (multinode-548379) Calling .GetMachineName
	I0819 19:41:22.833831  470984 buildroot.go:166] provisioning hostname "multinode-548379"
	I0819 19:41:22.833865  470984 main.go:141] libmachine: (multinode-548379) Calling .GetMachineName
	I0819 19:41:22.834120  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:41:22.836839  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:22.837231  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:41:22.837254  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:22.837406  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:41:22.837607  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:22.837811  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:22.838026  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:41:22.838296  470984 main.go:141] libmachine: Using SSH client type: native
	I0819 19:41:22.838510  470984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0819 19:41:22.838527  470984 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-548379 && echo "multinode-548379" | sudo tee /etc/hostname
	I0819 19:41:22.966486  470984 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-548379
	
	I0819 19:41:22.966563  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:41:22.969563  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:22.970008  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:41:22.970041  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:22.970312  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:41:22.970550  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:22.970730  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:22.970906  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:41:22.971077  470984 main.go:141] libmachine: Using SSH client type: native
	I0819 19:41:22.971297  470984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0819 19:41:22.971315  470984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-548379' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-548379/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-548379' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:41:23.090060  470984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:41:23.090089  470984 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:41:23.090127  470984 buildroot.go:174] setting up certificates
	I0819 19:41:23.090136  470984 provision.go:84] configureAuth start
	I0819 19:41:23.090146  470984 main.go:141] libmachine: (multinode-548379) Calling .GetMachineName
	I0819 19:41:23.090463  470984 main.go:141] libmachine: (multinode-548379) Calling .GetIP
	I0819 19:41:23.093075  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.093503  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:41:23.093537  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.093753  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:41:23.096249  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.096593  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:41:23.096619  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.096785  470984 provision.go:143] copyHostCerts
	I0819 19:41:23.096819  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:41:23.096852  470984 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:41:23.096872  470984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:41:23.096939  470984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:41:23.097017  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:41:23.097036  470984 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:41:23.097046  470984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:41:23.097082  470984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:41:23.097175  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:41:23.097201  470984 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:41:23.097210  470984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:41:23.097237  470984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:41:23.097293  470984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.multinode-548379 san=[127.0.0.1 192.168.39.35 localhost minikube multinode-548379]
	I0819 19:41:23.310230  470984 provision.go:177] copyRemoteCerts
	I0819 19:41:23.310295  470984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:41:23.310322  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:41:23.313449  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.313855  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:41:23.313888  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.314090  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:41:23.314278  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:23.314433  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:41:23.314617  470984 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/multinode-548379/id_rsa Username:docker}
	I0819 19:41:23.404288  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 19:41:23.404374  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:41:23.429517  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 19:41:23.429621  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0819 19:41:23.453725  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 19:41:23.453812  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 19:41:23.478690  470984 provision.go:87] duration metric: took 388.538189ms to configureAuth
	I0819 19:41:23.478727  470984 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:41:23.478956  470984 config.go:182] Loaded profile config "multinode-548379": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:41:23.479037  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:41:23.481940  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.482282  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:41:23.482315  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:41:23.482488  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:41:23.482699  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:23.482907  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:41:23.483029  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:41:23.483207  470984 main.go:141] libmachine: Using SSH client type: native
	I0819 19:41:23.483374  470984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0819 19:41:23.483391  470984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:42:54.405518  470984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:42:54.405611  470984 machine.go:96] duration metric: took 1m31.690557871s to provisionDockerMachine
	I0819 19:42:54.405630  470984 start.go:293] postStartSetup for "multinode-548379" (driver="kvm2")
	I0819 19:42:54.405664  470984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:42:54.405711  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:42:54.406100  470984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:42:54.406135  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:42:54.409906  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.410334  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:42:54.410368  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.410559  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:42:54.410808  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:42:54.410995  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:42:54.411194  470984 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/multinode-548379/id_rsa Username:docker}
	I0819 19:42:54.500522  470984 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:42:54.505031  470984 command_runner.go:130] > NAME=Buildroot
	I0819 19:42:54.505061  470984 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 19:42:54.505066  470984 command_runner.go:130] > ID=buildroot
	I0819 19:42:54.505071  470984 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 19:42:54.505077  470984 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 19:42:54.505119  470984 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:42:54.505154  470984 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:42:54.505248  470984 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:42:54.505348  470984 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:42:54.505361  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /etc/ssl/certs/4381592.pem
	I0819 19:42:54.505451  470984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:42:54.517458  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:42:54.542052  470984 start.go:296] duration metric: took 136.406058ms for postStartSetup
	I0819 19:42:54.542105  470984 fix.go:56] duration metric: took 1m31.850621637s for fixHost
	I0819 19:42:54.542133  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:42:54.544873  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.545246  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:42:54.545275  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.545534  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:42:54.545781  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:42:54.545964  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:42:54.546102  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:42:54.546284  470984 main.go:141] libmachine: Using SSH client type: native
	I0819 19:42:54.546449  470984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0819 19:42:54.546464  470984 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:42:54.662147  470984 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724096574.625059009
	
	I0819 19:42:54.662180  470984 fix.go:216] guest clock: 1724096574.625059009
	I0819 19:42:54.662188  470984 fix.go:229] Guest: 2024-08-19 19:42:54.625059009 +0000 UTC Remote: 2024-08-19 19:42:54.542111305 +0000 UTC m=+91.990512322 (delta=82.947704ms)
	I0819 19:42:54.662211  470984 fix.go:200] guest clock delta is within tolerance: 82.947704ms
	I0819 19:42:54.662218  470984 start.go:83] releasing machines lock for "multinode-548379", held for 1m31.970746689s
	I0819 19:42:54.662242  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:42:54.662518  470984 main.go:141] libmachine: (multinode-548379) Calling .GetIP
	I0819 19:42:54.665051  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.665508  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:42:54.665533  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.665712  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:42:54.666224  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:42:54.666412  470984 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:42:54.666511  470984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:42:54.666568  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:42:54.666634  470984 ssh_runner.go:195] Run: cat /version.json
	I0819 19:42:54.666660  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:42:54.669268  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.669299  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.669838  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:42:54.669871  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.669896  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:42:54.669916  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:54.670081  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:42:54.670154  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:42:54.670299  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:42:54.670302  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:42:54.670480  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:42:54.670488  470984 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:42:54.670664  470984 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/multinode-548379/id_rsa Username:docker}
	I0819 19:42:54.670670  470984 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/multinode-548379/id_rsa Username:docker}
	I0819 19:42:54.772527  470984 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 19:42:54.772595  470984 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 19:42:54.772792  470984 ssh_runner.go:195] Run: systemctl --version
	I0819 19:42:54.778783  470984 command_runner.go:130] > systemd 252 (252)
	I0819 19:42:54.778847  470984 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 19:42:54.778980  470984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:42:54.937197  470984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 19:42:54.942902  470984 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 19:42:54.943015  470984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:42:54.943081  470984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:42:54.953684  470984 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 19:42:54.953721  470984 start.go:495] detecting cgroup driver to use...
	I0819 19:42:54.953803  470984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:42:54.973245  470984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:42:54.988294  470984 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:42:54.988366  470984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:42:55.002771  470984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:42:55.017059  470984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:42:55.182200  470984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:42:55.340634  470984 docker.go:233] disabling docker service ...
	I0819 19:42:55.340724  470984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:42:55.361379  470984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:42:55.376425  470984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:42:55.530685  470984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:42:55.670682  470984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:42:55.684640  470984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:42:55.703722  470984 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0819 19:42:55.703780  470984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:42:55.703847  470984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:42:55.714592  470984 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:42:55.714681  470984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:42:55.725608  470984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:42:55.736674  470984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:42:55.747419  470984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:42:55.758557  470984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:42:55.769367  470984 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:42:55.781024  470984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:42:55.791844  470984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:42:55.801465  470984 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 19:42:55.801559  470984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:42:55.811382  470984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:42:55.952799  470984 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:42:56.202800  470984 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:42:56.202881  470984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:42:56.213515  470984 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 19:42:56.213546  470984 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 19:42:56.213553  470984 command_runner.go:130] > Device: 0,22	Inode: 1341        Links: 1
	I0819 19:42:56.213560  470984 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 19:42:56.213565  470984 command_runner.go:130] > Access: 2024-08-19 19:42:56.072560021 +0000
	I0819 19:42:56.213572  470984 command_runner.go:130] > Modify: 2024-08-19 19:42:56.055559563 +0000
	I0819 19:42:56.213578  470984 command_runner.go:130] > Change: 2024-08-19 19:42:56.055559563 +0000
	I0819 19:42:56.213584  470984 command_runner.go:130] >  Birth: -
	I0819 19:42:56.213758  470984 start.go:563] Will wait 60s for crictl version
	I0819 19:42:56.213833  470984 ssh_runner.go:195] Run: which crictl
	I0819 19:42:56.217708  470984 command_runner.go:130] > /usr/bin/crictl
	I0819 19:42:56.217868  470984 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:42:56.250994  470984 command_runner.go:130] > Version:  0.1.0
	I0819 19:42:56.251021  470984 command_runner.go:130] > RuntimeName:  cri-o
	I0819 19:42:56.251027  470984 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 19:42:56.251035  470984 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 19:42:56.251058  470984 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:42:56.251125  470984 ssh_runner.go:195] Run: crio --version
	I0819 19:42:56.281149  470984 command_runner.go:130] > crio version 1.29.1
	I0819 19:42:56.281181  470984 command_runner.go:130] > Version:        1.29.1
	I0819 19:42:56.281189  470984 command_runner.go:130] > GitCommit:      unknown
	I0819 19:42:56.281195  470984 command_runner.go:130] > GitCommitDate:  unknown
	I0819 19:42:56.281199  470984 command_runner.go:130] > GitTreeState:   clean
	I0819 19:42:56.281207  470984 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 19:42:56.281212  470984 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 19:42:56.281216  470984 command_runner.go:130] > Compiler:       gc
	I0819 19:42:56.281221  470984 command_runner.go:130] > Platform:       linux/amd64
	I0819 19:42:56.281225  470984 command_runner.go:130] > Linkmode:       dynamic
	I0819 19:42:56.281229  470984 command_runner.go:130] > BuildTags:      
	I0819 19:42:56.281235  470984 command_runner.go:130] >   containers_image_ostree_stub
	I0819 19:42:56.281242  470984 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 19:42:56.281248  470984 command_runner.go:130] >   btrfs_noversion
	I0819 19:42:56.281257  470984 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 19:42:56.281264  470984 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 19:42:56.281273  470984 command_runner.go:130] >   seccomp
	I0819 19:42:56.281279  470984 command_runner.go:130] > LDFlags:          unknown
	I0819 19:42:56.281284  470984 command_runner.go:130] > SeccompEnabled:   true
	I0819 19:42:56.281289  470984 command_runner.go:130] > AppArmorEnabled:  false
	I0819 19:42:56.281379  470984 ssh_runner.go:195] Run: crio --version
	I0819 19:42:56.311464  470984 command_runner.go:130] > crio version 1.29.1
	I0819 19:42:56.311490  470984 command_runner.go:130] > Version:        1.29.1
	I0819 19:42:56.311498  470984 command_runner.go:130] > GitCommit:      unknown
	I0819 19:42:56.311505  470984 command_runner.go:130] > GitCommitDate:  unknown
	I0819 19:42:56.311511  470984 command_runner.go:130] > GitTreeState:   clean
	I0819 19:42:56.311518  470984 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 19:42:56.311524  470984 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 19:42:56.311529  470984 command_runner.go:130] > Compiler:       gc
	I0819 19:42:56.311535  470984 command_runner.go:130] > Platform:       linux/amd64
	I0819 19:42:56.311541  470984 command_runner.go:130] > Linkmode:       dynamic
	I0819 19:42:56.311547  470984 command_runner.go:130] > BuildTags:      
	I0819 19:42:56.311560  470984 command_runner.go:130] >   containers_image_ostree_stub
	I0819 19:42:56.311567  470984 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 19:42:56.311572  470984 command_runner.go:130] >   btrfs_noversion
	I0819 19:42:56.311580  470984 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 19:42:56.311627  470984 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 19:42:56.311659  470984 command_runner.go:130] >   seccomp
	I0819 19:42:56.311666  470984 command_runner.go:130] > LDFlags:          unknown
	I0819 19:42:56.311672  470984 command_runner.go:130] > SeccompEnabled:   true
	I0819 19:42:56.311678  470984 command_runner.go:130] > AppArmorEnabled:  false
	I0819 19:42:56.314926  470984 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:42:56.316383  470984 main.go:141] libmachine: (multinode-548379) Calling .GetIP
	I0819 19:42:56.319200  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:56.319602  470984 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:42:56.319635  470984 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:42:56.319795  470984 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:42:56.324459  470984 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0819 19:42:56.324646  470984 kubeadm.go:883] updating cluster {Name:multinode-548379 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.0 ClusterName:multinode-548379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:42:56.324805  470984 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:42:56.324865  470984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:42:56.367605  470984 command_runner.go:130] > {
	I0819 19:42:56.367632  470984 command_runner.go:130] >   "images": [
	I0819 19:42:56.367636  470984 command_runner.go:130] >     {
	I0819 19:42:56.367645  470984 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 19:42:56.367649  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.367655  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 19:42:56.367659  470984 command_runner.go:130] >       ],
	I0819 19:42:56.367663  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.367678  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 19:42:56.367686  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 19:42:56.367689  470984 command_runner.go:130] >       ],
	I0819 19:42:56.367693  470984 command_runner.go:130] >       "size": "87165492",
	I0819 19:42:56.367697  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.367701  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.367707  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.367713  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.367717  470984 command_runner.go:130] >     },
	I0819 19:42:56.367720  470984 command_runner.go:130] >     {
	I0819 19:42:56.367726  470984 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 19:42:56.367730  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.367735  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 19:42:56.367743  470984 command_runner.go:130] >       ],
	I0819 19:42:56.367748  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.367757  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 19:42:56.367768  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 19:42:56.367773  470984 command_runner.go:130] >       ],
	I0819 19:42:56.367783  470984 command_runner.go:130] >       "size": "87190579",
	I0819 19:42:56.367791  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.367801  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.367810  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.367818  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.367823  470984 command_runner.go:130] >     },
	I0819 19:42:56.367830  470984 command_runner.go:130] >     {
	I0819 19:42:56.367838  470984 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 19:42:56.367847  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.367855  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 19:42:56.367864  470984 command_runner.go:130] >       ],
	I0819 19:42:56.367873  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.367885  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 19:42:56.367898  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 19:42:56.367906  470984 command_runner.go:130] >       ],
	I0819 19:42:56.367916  470984 command_runner.go:130] >       "size": "1363676",
	I0819 19:42:56.367926  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.367935  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.367944  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.367953  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.367961  470984 command_runner.go:130] >     },
	I0819 19:42:56.367966  470984 command_runner.go:130] >     {
	I0819 19:42:56.367980  470984 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 19:42:56.367986  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.367992  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 19:42:56.367997  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368002  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368011  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 19:42:56.368024  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 19:42:56.368030  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368035  470984 command_runner.go:130] >       "size": "31470524",
	I0819 19:42:56.368041  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.368046  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.368052  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368057  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.368063  470984 command_runner.go:130] >     },
	I0819 19:42:56.368096  470984 command_runner.go:130] >     {
	I0819 19:42:56.368111  470984 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 19:42:56.368115  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.368121  470984 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 19:42:56.368127  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368131  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368140  470984 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 19:42:56.368148  470984 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 19:42:56.368154  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368159  470984 command_runner.go:130] >       "size": "61245718",
	I0819 19:42:56.368165  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.368170  470984 command_runner.go:130] >       "username": "nonroot",
	I0819 19:42:56.368176  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368180  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.368186  470984 command_runner.go:130] >     },
	I0819 19:42:56.368190  470984 command_runner.go:130] >     {
	I0819 19:42:56.368198  470984 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 19:42:56.368204  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.368209  470984 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 19:42:56.368215  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368219  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368228  470984 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 19:42:56.368235  470984 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 19:42:56.368241  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368246  470984 command_runner.go:130] >       "size": "149009664",
	I0819 19:42:56.368252  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.368256  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.368261  470984 command_runner.go:130] >       },
	I0819 19:42:56.368266  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.368272  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368276  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.368281  470984 command_runner.go:130] >     },
	I0819 19:42:56.368285  470984 command_runner.go:130] >     {
	I0819 19:42:56.368304  470984 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 19:42:56.368315  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.368320  470984 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 19:42:56.368326  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368330  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368339  470984 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 19:42:56.368348  470984 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 19:42:56.368354  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368359  470984 command_runner.go:130] >       "size": "95233506",
	I0819 19:42:56.368364  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.368368  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.368374  470984 command_runner.go:130] >       },
	I0819 19:42:56.368378  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.368384  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368388  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.368394  470984 command_runner.go:130] >     },
	I0819 19:42:56.368397  470984 command_runner.go:130] >     {
	I0819 19:42:56.368403  470984 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 19:42:56.368409  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.368415  470984 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 19:42:56.368421  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368426  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368444  470984 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 19:42:56.368454  470984 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 19:42:56.368460  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368464  470984 command_runner.go:130] >       "size": "89437512",
	I0819 19:42:56.368470  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.368474  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.368480  470984 command_runner.go:130] >       },
	I0819 19:42:56.368483  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.368487  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368491  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.368494  470984 command_runner.go:130] >     },
	I0819 19:42:56.368497  470984 command_runner.go:130] >     {
	I0819 19:42:56.368503  470984 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 19:42:56.368506  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.368511  470984 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 19:42:56.368515  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368518  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368526  470984 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 19:42:56.368538  470984 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 19:42:56.368543  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368555  470984 command_runner.go:130] >       "size": "92728217",
	I0819 19:42:56.368558  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.368562  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.368566  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368570  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.368574  470984 command_runner.go:130] >     },
	I0819 19:42:56.368577  470984 command_runner.go:130] >     {
	I0819 19:42:56.368583  470984 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 19:42:56.368592  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.368597  470984 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 19:42:56.368601  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368607  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368618  470984 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 19:42:56.368628  470984 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 19:42:56.368634  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368641  470984 command_runner.go:130] >       "size": "68420936",
	I0819 19:42:56.368650  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.368657  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.368664  470984 command_runner.go:130] >       },
	I0819 19:42:56.368671  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.368680  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368689  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.368698  470984 command_runner.go:130] >     },
	I0819 19:42:56.368705  470984 command_runner.go:130] >     {
	I0819 19:42:56.368711  470984 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 19:42:56.368718  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.368722  470984 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 19:42:56.368728  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368733  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.368742  470984 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 19:42:56.368751  470984 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 19:42:56.368758  470984 command_runner.go:130] >       ],
	I0819 19:42:56.368762  470984 command_runner.go:130] >       "size": "742080",
	I0819 19:42:56.368767  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.368771  470984 command_runner.go:130] >         "value": "65535"
	I0819 19:42:56.368775  470984 command_runner.go:130] >       },
	I0819 19:42:56.368785  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.368791  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.368795  470984 command_runner.go:130] >       "pinned": true
	I0819 19:42:56.368800  470984 command_runner.go:130] >     }
	I0819 19:42:56.368804  470984 command_runner.go:130] >   ]
	I0819 19:42:56.368810  470984 command_runner.go:130] > }
	I0819 19:42:56.369014  470984 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:42:56.369029  470984 crio.go:433] Images already preloaded, skipping extraction
	I0819 19:42:56.369080  470984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:42:56.406444  470984 command_runner.go:130] > {
	I0819 19:42:56.406475  470984 command_runner.go:130] >   "images": [
	I0819 19:42:56.406482  470984 command_runner.go:130] >     {
	I0819 19:42:56.406495  470984 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 19:42:56.406501  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.406507  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 19:42:56.406511  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406515  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.406537  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 19:42:56.406547  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 19:42:56.406553  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406561  470984 command_runner.go:130] >       "size": "87165492",
	I0819 19:42:56.406568  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.406573  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.406581  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.406585  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.406594  470984 command_runner.go:130] >     },
	I0819 19:42:56.406600  470984 command_runner.go:130] >     {
	I0819 19:42:56.406612  470984 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 19:42:56.406621  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.406630  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 19:42:56.406641  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406648  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.406655  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 19:42:56.406662  470984 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 19:42:56.406668  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406673  470984 command_runner.go:130] >       "size": "87190579",
	I0819 19:42:56.406677  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.406683  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.406689  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.406693  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.406697  470984 command_runner.go:130] >     },
	I0819 19:42:56.406700  470984 command_runner.go:130] >     {
	I0819 19:42:56.406706  470984 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 19:42:56.406713  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.406717  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 19:42:56.406723  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406727  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.406735  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 19:42:56.406744  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 19:42:56.406747  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406752  470984 command_runner.go:130] >       "size": "1363676",
	I0819 19:42:56.406758  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.406762  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.406768  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.406772  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.406775  470984 command_runner.go:130] >     },
	I0819 19:42:56.406778  470984 command_runner.go:130] >     {
	I0819 19:42:56.406784  470984 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 19:42:56.406790  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.406795  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 19:42:56.406801  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406805  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.406814  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 19:42:56.406825  470984 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 19:42:56.406830  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406834  470984 command_runner.go:130] >       "size": "31470524",
	I0819 19:42:56.406839  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.406850  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.406855  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.406859  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.406863  470984 command_runner.go:130] >     },
	I0819 19:42:56.406868  470984 command_runner.go:130] >     {
	I0819 19:42:56.406874  470984 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 19:42:56.406878  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.406882  470984 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 19:42:56.406888  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406892  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.406901  470984 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 19:42:56.406907  470984 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 19:42:56.406913  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406917  470984 command_runner.go:130] >       "size": "61245718",
	I0819 19:42:56.406920  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.406925  470984 command_runner.go:130] >       "username": "nonroot",
	I0819 19:42:56.406928  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.406932  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.406936  470984 command_runner.go:130] >     },
	I0819 19:42:56.406939  470984 command_runner.go:130] >     {
	I0819 19:42:56.406947  470984 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 19:42:56.406951  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.406958  470984 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 19:42:56.406961  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406965  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.406972  470984 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 19:42:56.406980  470984 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 19:42:56.406984  470984 command_runner.go:130] >       ],
	I0819 19:42:56.406988  470984 command_runner.go:130] >       "size": "149009664",
	I0819 19:42:56.406992  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.406996  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.406999  470984 command_runner.go:130] >       },
	I0819 19:42:56.407003  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.407007  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.407011  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.407016  470984 command_runner.go:130] >     },
	I0819 19:42:56.407019  470984 command_runner.go:130] >     {
	I0819 19:42:56.407025  470984 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 19:42:56.407031  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.407035  470984 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 19:42:56.407041  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407045  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.407052  470984 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 19:42:56.407063  470984 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 19:42:56.407069  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407073  470984 command_runner.go:130] >       "size": "95233506",
	I0819 19:42:56.407077  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.407081  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.407084  470984 command_runner.go:130] >       },
	I0819 19:42:56.407088  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.407094  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.407098  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.407101  470984 command_runner.go:130] >     },
	I0819 19:42:56.407105  470984 command_runner.go:130] >     {
	I0819 19:42:56.407111  470984 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 19:42:56.407118  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.407123  470984 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 19:42:56.407127  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407131  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.407145  470984 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 19:42:56.407155  470984 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 19:42:56.407159  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407163  470984 command_runner.go:130] >       "size": "89437512",
	I0819 19:42:56.407169  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.407173  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.407178  470984 command_runner.go:130] >       },
	I0819 19:42:56.407183  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.407188  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.407193  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.407198  470984 command_runner.go:130] >     },
	I0819 19:42:56.407202  470984 command_runner.go:130] >     {
	I0819 19:42:56.407210  470984 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 19:42:56.407215  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.407219  470984 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 19:42:56.407225  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407229  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.407236  470984 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 19:42:56.407245  470984 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 19:42:56.407249  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407253  470984 command_runner.go:130] >       "size": "92728217",
	I0819 19:42:56.407259  470984 command_runner.go:130] >       "uid": null,
	I0819 19:42:56.407263  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.407269  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.407273  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.407276  470984 command_runner.go:130] >     },
	I0819 19:42:56.407280  470984 command_runner.go:130] >     {
	I0819 19:42:56.407288  470984 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 19:42:56.407294  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.407299  470984 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 19:42:56.407304  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407308  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.407317  470984 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 19:42:56.407326  470984 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 19:42:56.407332  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407336  470984 command_runner.go:130] >       "size": "68420936",
	I0819 19:42:56.407340  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.407344  470984 command_runner.go:130] >         "value": "0"
	I0819 19:42:56.407347  470984 command_runner.go:130] >       },
	I0819 19:42:56.407351  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.407355  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.407359  470984 command_runner.go:130] >       "pinned": false
	I0819 19:42:56.407362  470984 command_runner.go:130] >     },
	I0819 19:42:56.407365  470984 command_runner.go:130] >     {
	I0819 19:42:56.407371  470984 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 19:42:56.407377  470984 command_runner.go:130] >       "repoTags": [
	I0819 19:42:56.407381  470984 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 19:42:56.407385  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407392  470984 command_runner.go:130] >       "repoDigests": [
	I0819 19:42:56.407399  470984 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 19:42:56.407407  470984 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 19:42:56.407411  470984 command_runner.go:130] >       ],
	I0819 19:42:56.407415  470984 command_runner.go:130] >       "size": "742080",
	I0819 19:42:56.407421  470984 command_runner.go:130] >       "uid": {
	I0819 19:42:56.407425  470984 command_runner.go:130] >         "value": "65535"
	I0819 19:42:56.407430  470984 command_runner.go:130] >       },
	I0819 19:42:56.407434  470984 command_runner.go:130] >       "username": "",
	I0819 19:42:56.407438  470984 command_runner.go:130] >       "spec": null,
	I0819 19:42:56.407444  470984 command_runner.go:130] >       "pinned": true
	I0819 19:42:56.407447  470984 command_runner.go:130] >     }
	I0819 19:42:56.407451  470984 command_runner.go:130] >   ]
	I0819 19:42:56.407454  470984 command_runner.go:130] > }
	I0819 19:42:56.407578  470984 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:42:56.407591  470984 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:42:56.407598  470984 kubeadm.go:934] updating node { 192.168.39.35 8443 v1.31.0 crio true true} ...
	I0819 19:42:56.407704  470984 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-548379 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-548379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:42:56.407766  470984 ssh_runner.go:195] Run: crio config
	I0819 19:42:56.441110  470984 command_runner.go:130] ! time="2024-08-19 19:42:56.404175179Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 19:42:56.447657  470984 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0819 19:42:56.457302  470984 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 19:42:56.457329  470984 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 19:42:56.457339  470984 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 19:42:56.457344  470984 command_runner.go:130] > #
	I0819 19:42:56.457354  470984 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 19:42:56.457363  470984 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 19:42:56.457372  470984 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 19:42:56.457381  470984 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 19:42:56.457389  470984 command_runner.go:130] > # reload'.
	I0819 19:42:56.457397  470984 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 19:42:56.457408  470984 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 19:42:56.457419  470984 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 19:42:56.457432  470984 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 19:42:56.457440  470984 command_runner.go:130] > [crio]
	I0819 19:42:56.457450  470984 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 19:42:56.457460  470984 command_runner.go:130] > # containers images, in this directory.
	I0819 19:42:56.457470  470984 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 19:42:56.457482  470984 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 19:42:56.457490  470984 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 19:42:56.457500  470984 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 19:42:56.457510  470984 command_runner.go:130] > # imagestore = ""
	I0819 19:42:56.457519  470984 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 19:42:56.457527  470984 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 19:42:56.457533  470984 command_runner.go:130] > storage_driver = "overlay"
	I0819 19:42:56.457539  470984 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 19:42:56.457547  470984 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 19:42:56.457553  470984 command_runner.go:130] > storage_option = [
	I0819 19:42:56.457558  470984 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 19:42:56.457563  470984 command_runner.go:130] > ]
	I0819 19:42:56.457570  470984 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 19:42:56.457578  470984 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 19:42:56.457584  470984 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 19:42:56.457590  470984 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 19:42:56.457598  470984 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 19:42:56.457603  470984 command_runner.go:130] > # always happen on a node reboot
	I0819 19:42:56.457610  470984 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 19:42:56.457626  470984 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 19:42:56.457638  470984 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 19:42:56.457649  470984 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 19:42:56.457660  470984 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 19:42:56.457674  470984 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 19:42:56.457690  470984 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 19:42:56.457700  470984 command_runner.go:130] > # internal_wipe = true
	I0819 19:42:56.457713  470984 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 19:42:56.457721  470984 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 19:42:56.457728  470984 command_runner.go:130] > # internal_repair = false
	I0819 19:42:56.457734  470984 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 19:42:56.457747  470984 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 19:42:56.457755  470984 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 19:42:56.457764  470984 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
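The [crio] keys dumped above (root, runroot, storage_driver, storage_option) are plain TOML. A minimal sketch of reading them back in Go, assuming the standard /etc/crio/crio.conf path and the github.com/BurntSushi/toml package (neither taken from minikube's own code):

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// crioStorage mirrors only the storage-related keys from the [crio] table above.
type crioStorage struct {
	Crio struct {
		Root          string   `toml:"root"`
		RunRoot       string   `toml:"runroot"`
		StorageDriver string   `toml:"storage_driver"`
		StorageOption []string `toml:"storage_option"`
	} `toml:"crio"`
}

func main() {
	var cfg crioStorage
	// Assumption: the config dumped above lives at the standard path.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg.Crio.Root, cfg.Crio.RunRoot, cfg.Crio.StorageDriver, cfg.Crio.StorageOption)
}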
	I0819 19:42:56.457772  470984 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 19:42:56.457777  470984 command_runner.go:130] > [crio.api]
	I0819 19:42:56.457784  470984 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 19:42:56.457791  470984 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 19:42:56.457796  470984 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 19:42:56.457803  470984 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 19:42:56.457809  470984 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 19:42:56.457817  470984 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 19:42:56.457821  470984 command_runner.go:130] > # stream_port = "0"
	I0819 19:42:56.457828  470984 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 19:42:56.457832  470984 command_runner.go:130] > # stream_enable_tls = false
	I0819 19:42:56.457840  470984 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 19:42:56.457845  470984 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 19:42:56.457853  470984 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 19:42:56.457861  470984 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 19:42:56.457867  470984 command_runner.go:130] > # minutes.
	I0819 19:42:56.457872  470984 command_runner.go:130] > # stream_tls_cert = ""
	I0819 19:42:56.457879  470984 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 19:42:56.457885  470984 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 19:42:56.457891  470984 command_runner.go:130] > # stream_tls_key = ""
	I0819 19:42:56.457900  470984 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 19:42:56.457908  470984 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 19:42:56.457925  470984 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 19:42:56.457931  470984 command_runner.go:130] > # stream_tls_ca = ""
	I0819 19:42:56.457939  470984 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 19:42:56.457946  470984 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 19:42:56.457953  470984 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 19:42:56.457959  470984 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
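As a quick sanity check on the numbers above: 16777216 bytes is 16 * 1024 * 1024, so the config lowers both gRPC message limits to 16 MiB from the 80 MiB default mentioned in the comments. A trivial Go check of that arithmetic:

package main

import "fmt"

func main() {
	const grpcMaxMsgSize = 16 * 1024 * 1024 // 16777216, the value set above
	const crioDefault = 80 * 1024 * 1024    // default noted in the comments
	fmt.Println(grpcMaxMsgSize, crioDefault) // 16777216 83886080
}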
	I0819 19:42:56.457966  470984 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 19:42:56.457973  470984 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 19:42:56.457977  470984 command_runner.go:130] > [crio.runtime]
	I0819 19:42:56.457985  470984 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 19:42:56.457993  470984 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 19:42:56.457997  470984 command_runner.go:130] > # "nofile=1024:2048"
	I0819 19:42:56.458005  470984 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 19:42:56.458010  470984 command_runner.go:130] > # default_ulimits = [
	I0819 19:42:56.458013  470984 command_runner.go:130] > # ]
	I0819 19:42:56.458021  470984 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 19:42:56.458027  470984 command_runner.go:130] > # no_pivot = false
	I0819 19:42:56.458033  470984 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 19:42:56.458042  470984 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 19:42:56.458048  470984 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 19:42:56.458054  470984 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 19:42:56.458060  470984 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 19:42:56.458066  470984 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 19:42:56.458073  470984 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 19:42:56.458077  470984 command_runner.go:130] > # Cgroup setting for conmon
	I0819 19:42:56.458086  470984 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 19:42:56.458092  470984 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 19:42:56.458098  470984 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 19:42:56.458104  470984 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 19:42:56.458111  470984 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 19:42:56.458117  470984 command_runner.go:130] > conmon_env = [
	I0819 19:42:56.458122  470984 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 19:42:56.458127  470984 command_runner.go:130] > ]
	I0819 19:42:56.458132  470984 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 19:42:56.458140  470984 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 19:42:56.458146  470984 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 19:42:56.458152  470984 command_runner.go:130] > # default_env = [
	I0819 19:42:56.458155  470984 command_runner.go:130] > # ]
	I0819 19:42:56.458162  470984 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 19:42:56.458170  470984 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0819 19:42:56.458175  470984 command_runner.go:130] > # selinux = false
	I0819 19:42:56.458182  470984 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 19:42:56.458190  470984 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 19:42:56.458195  470984 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 19:42:56.458201  470984 command_runner.go:130] > # seccomp_profile = ""
	I0819 19:42:56.458206  470984 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 19:42:56.458214  470984 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 19:42:56.458219  470984 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 19:42:56.458226  470984 command_runner.go:130] > # which might increase security.
	I0819 19:42:56.458230  470984 command_runner.go:130] > # This option is currently deprecated,
	I0819 19:42:56.458235  470984 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 19:42:56.458243  470984 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 19:42:56.458250  470984 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 19:42:56.458258  470984 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 19:42:56.458266  470984 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 19:42:56.458271  470984 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I0819 19:42:56.458278  470984 command_runner.go:130] > # This option supports live configuration reload.
	I0819 19:42:56.458283  470984 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 19:42:56.458290  470984 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 19:42:56.458296  470984 command_runner.go:130] > # the cgroup blockio controller.
	I0819 19:42:56.458302  470984 command_runner.go:130] > # blockio_config_file = ""
	I0819 19:42:56.458308  470984 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 19:42:56.458314  470984 command_runner.go:130] > # blockio parameters.
	I0819 19:42:56.458318  470984 command_runner.go:130] > # blockio_reload = false
	I0819 19:42:56.458324  470984 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 19:42:56.458330  470984 command_runner.go:130] > # irqbalance daemon.
	I0819 19:42:56.458335  470984 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 19:42:56.458343  470984 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0819 19:42:56.458352  470984 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 19:42:56.458361  470984 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 19:42:56.458369  470984 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 19:42:56.458377  470984 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 19:42:56.458382  470984 command_runner.go:130] > # This option supports live configuration reload.
	I0819 19:42:56.458388  470984 command_runner.go:130] > # rdt_config_file = ""
	I0819 19:42:56.458393  470984 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 19:42:56.458399  470984 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 19:42:56.458415  470984 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 19:42:56.458422  470984 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 19:42:56.458428  470984 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 19:42:56.458435  470984 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 19:42:56.458440  470984 command_runner.go:130] > # will be added.
	I0819 19:42:56.458444  470984 command_runner.go:130] > # default_capabilities = [
	I0819 19:42:56.458450  470984 command_runner.go:130] > # 	"CHOWN",
	I0819 19:42:56.458454  470984 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 19:42:56.458460  470984 command_runner.go:130] > # 	"FSETID",
	I0819 19:42:56.458464  470984 command_runner.go:130] > # 	"FOWNER",
	I0819 19:42:56.458469  470984 command_runner.go:130] > # 	"SETGID",
	I0819 19:42:56.458473  470984 command_runner.go:130] > # 	"SETUID",
	I0819 19:42:56.458478  470984 command_runner.go:130] > # 	"SETPCAP",
	I0819 19:42:56.458482  470984 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 19:42:56.458488  470984 command_runner.go:130] > # 	"KILL",
	I0819 19:42:56.458492  470984 command_runner.go:130] > # ]
	I0819 19:42:56.458502  470984 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 19:42:56.458510  470984 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 19:42:56.458518  470984 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 19:42:56.458525  470984 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 19:42:56.458533  470984 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 19:42:56.458538  470984 command_runner.go:130] > default_sysctls = [
	I0819 19:42:56.458543  470984 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 19:42:56.458548  470984 command_runner.go:130] > ]
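The single default sysctl above, net.ipv4.ip_unprivileged_port_start=0, lets unprivileged container processes bind ports below 1024. An illustrative Go snippet, not minikube code, that reads the host-wide value of the same knob for comparison:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Host-wide value of the sysctl that the default_sysctls entry above
	// sets to 0 inside each pod's network namespace.
	b, err := os.ReadFile("/proc/sys/net/ipv4/ip_unprivileged_port_start")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ip_unprivileged_port_start =", strings.TrimSpace(string(b)))
}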
	I0819 19:42:56.458553  470984 command_runner.go:130] > # List of devices on the host that a
	I0819 19:42:56.458561  470984 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 19:42:56.458568  470984 command_runner.go:130] > # allowed_devices = [
	I0819 19:42:56.458572  470984 command_runner.go:130] > # 	"/dev/fuse",
	I0819 19:42:56.458577  470984 command_runner.go:130] > # ]
	I0819 19:42:56.458581  470984 command_runner.go:130] > # List of additional devices, specified as
	I0819 19:42:56.458590  470984 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 19:42:56.458597  470984 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 19:42:56.458603  470984 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 19:42:56.458612  470984 command_runner.go:130] > # additional_devices = [
	I0819 19:42:56.458620  470984 command_runner.go:130] > # ]
	I0819 19:42:56.458628  470984 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 19:42:56.458636  470984 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 19:42:56.458645  470984 command_runner.go:130] > # 	"/etc/cdi",
	I0819 19:42:56.458652  470984 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 19:42:56.458660  470984 command_runner.go:130] > # ]
	I0819 19:42:56.458672  470984 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 19:42:56.458683  470984 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 19:42:56.458689  470984 command_runner.go:130] > # Defaults to false.
	I0819 19:42:56.458694  470984 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 19:42:56.458702  470984 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 19:42:56.458711  470984 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 19:42:56.458715  470984 command_runner.go:130] > # hooks_dir = [
	I0819 19:42:56.458720  470984 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 19:42:56.458728  470984 command_runner.go:130] > # ]
	I0819 19:42:56.458737  470984 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 19:42:56.458751  470984 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 19:42:56.458759  470984 command_runner.go:130] > # its default mounts from the following two files:
	I0819 19:42:56.458762  470984 command_runner.go:130] > #
	I0819 19:42:56.458768  470984 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 19:42:56.458776  470984 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 19:42:56.458784  470984 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 19:42:56.458788  470984 command_runner.go:130] > #
	I0819 19:42:56.458796  470984 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 19:42:56.458804  470984 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 19:42:56.458811  470984 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 19:42:56.458819  470984 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 19:42:56.458822  470984 command_runner.go:130] > #
	I0819 19:42:56.458826  470984 command_runner.go:130] > # default_mounts_file = ""
	I0819 19:42:56.458834  470984 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 19:42:56.458840  470984 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 19:42:56.458845  470984 command_runner.go:130] > pids_limit = 1024
	I0819 19:42:56.458851  470984 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0819 19:42:56.458858  470984 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 19:42:56.458864  470984 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 19:42:56.458874  470984 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 19:42:56.458880  470984 command_runner.go:130] > # log_size_max = -1
	I0819 19:42:56.458887  470984 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 19:42:56.458893  470984 command_runner.go:130] > # log_to_journald = false
	I0819 19:42:56.458899  470984 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 19:42:56.458907  470984 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 19:42:56.458912  470984 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 19:42:56.458919  470984 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 19:42:56.458924  470984 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 19:42:56.458930  470984 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 19:42:56.458935  470984 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 19:42:56.458942  470984 command_runner.go:130] > # read_only = false
	I0819 19:42:56.458947  470984 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 19:42:56.458960  470984 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 19:42:56.458966  470984 command_runner.go:130] > # live configuration reload.
	I0819 19:42:56.458970  470984 command_runner.go:130] > # log_level = "info"
	I0819 19:42:56.458978  470984 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 19:42:56.458986  470984 command_runner.go:130] > # This option supports live configuration reload.
	I0819 19:42:56.458990  470984 command_runner.go:130] > # log_filter = ""
	I0819 19:42:56.458998  470984 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 19:42:56.459005  470984 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 19:42:56.459012  470984 command_runner.go:130] > # separated by comma.
	I0819 19:42:56.459019  470984 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 19:42:56.459026  470984 command_runner.go:130] > # uid_mappings = ""
	I0819 19:42:56.459031  470984 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 19:42:56.459039  470984 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 19:42:56.459043  470984 command_runner.go:130] > # separated by comma.
	I0819 19:42:56.459051  470984 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 19:42:56.459057  470984 command_runner.go:130] > # gid_mappings = ""
	I0819 19:42:56.459063  470984 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 19:42:56.459071  470984 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 19:42:56.459079  470984 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 19:42:56.459086  470984 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 19:42:56.459092  470984 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 19:42:56.459098  470984 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 19:42:56.459107  470984 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 19:42:56.459113  470984 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 19:42:56.459121  470984 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 19:42:56.459128  470984 command_runner.go:130] > # minimum_mappable_gid = -1
	I0819 19:42:56.459134  470984 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 19:42:56.459141  470984 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 19:42:56.459146  470984 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 19:42:56.459152  470984 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 19:42:56.459158  470984 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 19:42:56.459166  470984 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 19:42:56.459171  470984 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 19:42:56.459178  470984 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 19:42:56.459182  470984 command_runner.go:130] > drop_infra_ctr = false
	I0819 19:42:56.459190  470984 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 19:42:56.459197  470984 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 19:42:56.459204  470984 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 19:42:56.459211  470984 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 19:42:56.459218  470984 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 19:42:56.459226  470984 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 19:42:56.459231  470984 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 19:42:56.459238  470984 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 19:42:56.459242  470984 command_runner.go:130] > # shared_cpuset = ""
	I0819 19:42:56.459247  470984 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 19:42:56.459254  470984 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 19:42:56.459259  470984 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 19:42:56.459265  470984 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 19:42:56.459271  470984 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 19:42:56.459276  470984 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 19:42:56.459284  470984 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 19:42:56.459288  470984 command_runner.go:130] > # enable_criu_support = false
	I0819 19:42:56.459294  470984 command_runner.go:130] > # Enable/disable the generation of the container,
	I0819 19:42:56.459302  470984 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 19:42:56.459306  470984 command_runner.go:130] > # enable_pod_events = false
	I0819 19:42:56.459314  470984 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 19:42:56.459327  470984 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 19:42:56.459331  470984 command_runner.go:130] > # default_runtime = "runc"
	I0819 19:42:56.459338  470984 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 19:42:56.459345  470984 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0819 19:42:56.459356  470984 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 19:42:56.459364  470984 command_runner.go:130] > # creation as a file is not desired either.
	I0819 19:42:56.459371  470984 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 19:42:56.459379  470984 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 19:42:56.459383  470984 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 19:42:56.459389  470984 command_runner.go:130] > # ]
	I0819 19:42:56.459395  470984 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 19:42:56.459403  470984 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 19:42:56.459410  470984 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 19:42:56.459417  470984 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 19:42:56.459420  470984 command_runner.go:130] > #
	I0819 19:42:56.459426  470984 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 19:42:56.459431  470984 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 19:42:56.459457  470984 command_runner.go:130] > # runtime_type = "oci"
	I0819 19:42:56.459464  470984 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 19:42:56.459468  470984 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 19:42:56.459475  470984 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 19:42:56.459479  470984 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 19:42:56.459485  470984 command_runner.go:130] > # monitor_env = []
	I0819 19:42:56.459490  470984 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 19:42:56.459497  470984 command_runner.go:130] > # allowed_annotations = []
	I0819 19:42:56.459503  470984 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 19:42:56.459509  470984 command_runner.go:130] > # Where:
	I0819 19:42:56.459514  470984 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 19:42:56.459522  470984 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 19:42:56.459531  470984 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 19:42:56.459539  470984 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 19:42:56.459543  470984 command_runner.go:130] > #   in $PATH.
	I0819 19:42:56.459550  470984 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 19:42:56.459556  470984 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 19:42:56.459563  470984 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 19:42:56.459568  470984 command_runner.go:130] > #   state.
	I0819 19:42:56.459574  470984 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 19:42:56.459582  470984 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0819 19:42:56.459591  470984 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 19:42:56.459598  470984 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 19:42:56.459604  470984 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 19:42:56.459616  470984 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 19:42:56.459627  470984 command_runner.go:130] > #   The currently recognized values are:
	I0819 19:42:56.459639  470984 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 19:42:56.459653  470984 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 19:42:56.459665  470984 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 19:42:56.459677  470984 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 19:42:56.459691  470984 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 19:42:56.459701  470984 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 19:42:56.459709  470984 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 19:42:56.459718  470984 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 19:42:56.459725  470984 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 19:42:56.459734  470984 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 19:42:56.459745  470984 command_runner.go:130] > #   deprecated option "conmon".
	I0819 19:42:56.459754  470984 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 19:42:56.459762  470984 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 19:42:56.459768  470984 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 19:42:56.459775  470984 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 19:42:56.459781  470984 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0819 19:42:56.459788  470984 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 19:42:56.459794  470984 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 19:42:56.459801  470984 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0819 19:42:56.459804  470984 command_runner.go:130] > #
	I0819 19:42:56.459810  470984 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 19:42:56.459814  470984 command_runner.go:130] > #
	I0819 19:42:56.459820  470984 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 19:42:56.459828  470984 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 19:42:56.459832  470984 command_runner.go:130] > #
	I0819 19:42:56.459838  470984 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 19:42:56.459846  470984 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 19:42:56.459849  470984 command_runner.go:130] > #
	I0819 19:42:56.459855  470984 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 19:42:56.459860  470984 command_runner.go:130] > # feature.
	I0819 19:42:56.459863  470984 command_runner.go:130] > #
	I0819 19:42:56.459869  470984 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0819 19:42:56.459877  470984 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 19:42:56.459884  470984 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 19:42:56.459892  470984 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 19:42:56.459900  470984 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 19:42:56.459905  470984 command_runner.go:130] > #
	I0819 19:42:56.459911  470984 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 19:42:56.459919  470984 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 19:42:56.459923  470984 command_runner.go:130] > #
	I0819 19:42:56.459929  470984 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 19:42:56.459936  470984 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 19:42:56.459939  470984 command_runner.go:130] > #
	I0819 19:42:56.459945  470984 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 19:42:56.459953  470984 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 19:42:56.459957  470984 command_runner.go:130] > # limitation.
	I0819 19:42:56.459966  470984 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 19:42:56.459972  470984 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 19:42:56.459976  470984 command_runner.go:130] > runtime_type = "oci"
	I0819 19:42:56.459982  470984 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 19:42:56.459986  470984 command_runner.go:130] > runtime_config_path = ""
	I0819 19:42:56.459992  470984 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 19:42:56.459998  470984 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 19:42:56.460006  470984 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 19:42:56.460014  470984 command_runner.go:130] > monitor_env = [
	I0819 19:42:56.460027  470984 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 19:42:56.460033  470984 command_runner.go:130] > ]
	I0819 19:42:56.460040  470984 command_runner.go:130] > privileged_without_host_devices = false
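The runc handler above pins runtime_path to /usr/bin/runc and monitor_path to /usr/libexec/crio/conmon. A small illustrative Go check, not minikube's implementation, that those binaries exist and are executable on the node:

package main

import (
	"fmt"
	"os"
)

// executable reports whether path exists, is a regular file, and has any
// execute bit set.
func executable(path string) bool {
	info, err := os.Stat(path)
	return err == nil && info.Mode().IsRegular() && info.Mode()&0o111 != 0
}

func main() {
	// Paths taken from the runtime_path and monitor_path values above.
	for _, p := range []string{"/usr/bin/runc", "/usr/libexec/crio/conmon"} {
		fmt.Printf("%s executable: %v\n", p, executable(p))
	}
}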
	I0819 19:42:56.460052  470984 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 19:42:56.460062  470984 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 19:42:56.460074  470984 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 19:42:56.460088  470984 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0819 19:42:56.460098  470984 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 19:42:56.460107  470984 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 19:42:56.460117  470984 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 19:42:56.460126  470984 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 19:42:56.460132  470984 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 19:42:56.460139  470984 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 19:42:56.460142  470984 command_runner.go:130] > # Example:
	I0819 19:42:56.460147  470984 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 19:42:56.460151  470984 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 19:42:56.460156  470984 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 19:42:56.460161  470984 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 19:42:56.460164  470984 command_runner.go:130] > # cpuset = "0-1"
	I0819 19:42:56.460168  470984 command_runner.go:130] > # cpushares = 0
	I0819 19:42:56.460171  470984 command_runner.go:130] > # Where:
	I0819 19:42:56.460176  470984 command_runner.go:130] > # The workload name is workload-type.
	I0819 19:42:56.460182  470984 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 19:42:56.460187  470984 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 19:42:56.460192  470984 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 19:42:56.460199  470984 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 19:42:56.460205  470984 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0819 19:42:56.460211  470984 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 19:42:56.460217  470984 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 19:42:56.460221  470984 command_runner.go:130] > # Default value is set to true
	I0819 19:42:56.460225  470984 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 19:42:56.460230  470984 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 19:42:56.460235  470984 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 19:42:56.460239  470984 command_runner.go:130] > # Default value is set to 'false'
	I0819 19:42:56.460243  470984 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 19:42:56.460249  470984 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 19:42:56.460252  470984 command_runner.go:130] > #
	I0819 19:42:56.460257  470984 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 19:42:56.460263  470984 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 19:42:56.460269  470984 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 19:42:56.460274  470984 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 19:42:56.460280  470984 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0819 19:42:56.460283  470984 command_runner.go:130] > [crio.image]
	I0819 19:42:56.460289  470984 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 19:42:56.460293  470984 command_runner.go:130] > # default_transport = "docker://"
	I0819 19:42:56.460299  470984 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 19:42:56.460305  470984 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 19:42:56.460313  470984 command_runner.go:130] > # global_auth_file = ""
	I0819 19:42:56.460319  470984 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 19:42:56.460326  470984 command_runner.go:130] > # This option supports live configuration reload.
	I0819 19:42:56.460331  470984 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 19:42:56.460339  470984 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 19:42:56.460346  470984 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 19:42:56.460353  470984 command_runner.go:130] > # This option supports live configuration reload.
	I0819 19:42:56.460357  470984 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 19:42:56.460365  470984 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 19:42:56.460371  470984 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0819 19:42:56.460377  470984 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0819 19:42:56.460385  470984 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 19:42:56.460391  470984 command_runner.go:130] > # pause_command = "/pause"
	I0819 19:42:56.460396  470984 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 19:42:56.460404  470984 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 19:42:56.460410  470984 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 19:42:56.460420  470984 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 19:42:56.460428  470984 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 19:42:56.460434  470984 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 19:42:56.460439  470984 command_runner.go:130] > # pinned_images = [
	I0819 19:42:56.460443  470984 command_runner.go:130] > # ]
	I0819 19:42:56.460450  470984 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 19:42:56.460457  470984 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 19:42:56.460463  470984 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 19:42:56.460471  470984 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 19:42:56.460478  470984 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 19:42:56.460484  470984 command_runner.go:130] > # signature_policy = ""
	I0819 19:42:56.460490  470984 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 19:42:56.460501  470984 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 19:42:56.460509  470984 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 19:42:56.460515  470984 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0819 19:42:56.460523  470984 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 19:42:56.460528  470984 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 19:42:56.460535  470984 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 19:42:56.460541  470984 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 19:42:56.460547  470984 command_runner.go:130] > # changing them here.
	I0819 19:42:56.460551  470984 command_runner.go:130] > # insecure_registries = [
	I0819 19:42:56.460556  470984 command_runner.go:130] > # ]
	I0819 19:42:56.460562  470984 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 19:42:56.460569  470984 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0819 19:42:56.460573  470984 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 19:42:56.460579  470984 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 19:42:56.460583  470984 command_runner.go:130] > # big_files_temporary_dir = ""
	I0819 19:42:56.460591  470984 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 19:42:56.460596  470984 command_runner.go:130] > # CNI plugins.
	I0819 19:42:56.460599  470984 command_runner.go:130] > [crio.network]
	I0819 19:42:56.460608  470984 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 19:42:56.460618  470984 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0819 19:42:56.460627  470984 command_runner.go:130] > # cni_default_network = ""
	I0819 19:42:56.460638  470984 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 19:42:56.460648  470984 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 19:42:56.460659  470984 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 19:42:56.460668  470984 command_runner.go:130] > # plugin_dirs = [
	I0819 19:42:56.460676  470984 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 19:42:56.460681  470984 command_runner.go:130] > # ]
	I0819 19:42:56.460691  470984 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0819 19:42:56.460700  470984 command_runner.go:130] > [crio.metrics]
	I0819 19:42:56.460710  470984 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 19:42:56.460716  470984 command_runner.go:130] > enable_metrics = true
	I0819 19:42:56.460724  470984 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 19:42:56.460729  470984 command_runner.go:130] > # Per default all metrics are enabled.
	I0819 19:42:56.460737  470984 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0819 19:42:56.460747  470984 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 19:42:56.460755  470984 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 19:42:56.460759  470984 command_runner.go:130] > # metrics_collectors = [
	I0819 19:42:56.460765  470984 command_runner.go:130] > # 	"operations",
	I0819 19:42:56.460770  470984 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 19:42:56.460776  470984 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 19:42:56.460781  470984 command_runner.go:130] > # 	"operations_errors",
	I0819 19:42:56.460787  470984 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 19:42:56.460791  470984 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 19:42:56.460797  470984 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 19:42:56.460801  470984 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 19:42:56.460806  470984 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 19:42:56.460812  470984 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 19:42:56.460815  470984 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 19:42:56.460822  470984 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 19:42:56.460826  470984 command_runner.go:130] > # 	"containers_oom_total",
	I0819 19:42:56.460832  470984 command_runner.go:130] > # 	"containers_oom",
	I0819 19:42:56.460836  470984 command_runner.go:130] > # 	"processes_defunct",
	I0819 19:42:56.460841  470984 command_runner.go:130] > # 	"operations_total",
	I0819 19:42:56.460845  470984 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 19:42:56.460850  470984 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 19:42:56.460856  470984 command_runner.go:130] > # 	"operations_errors_total",
	I0819 19:42:56.460859  470984 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 19:42:56.460866  470984 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 19:42:56.460871  470984 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 19:42:56.460877  470984 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 19:42:56.460882  470984 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 19:42:56.460889  470984 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 19:42:56.460894  470984 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 19:42:56.460900  470984 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 19:42:56.460904  470984 command_runner.go:130] > # ]
	I0819 19:42:56.460911  470984 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 19:42:56.460915  470984 command_runner.go:130] > # metrics_port = 9090
	I0819 19:42:56.460921  470984 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 19:42:56.460925  470984 command_runner.go:130] > # metrics_socket = ""
	I0819 19:42:56.460932  470984 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 19:42:56.460938  470984 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 19:42:56.460946  470984 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 19:42:56.460953  470984 command_runner.go:130] > # certificate on any modification event.
	I0819 19:42:56.460957  470984 command_runner.go:130] > # metrics_cert = ""
	I0819 19:42:56.460964  470984 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 19:42:56.460969  470984 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 19:42:56.460975  470984 command_runner.go:130] > # metrics_key = ""
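With enable_metrics = true and the commented-out metrics_port defaulting to 9090, the CRI-O metrics endpoint should be scrapeable on the node. A hedged Go sketch, assuming that default port rather than anything set explicitly in this config:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Assumption: metrics_port keeps its documented default of 9090.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("status %d, %d bytes of Prometheus exposition text\n", resp.StatusCode, len(body))
}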
	I0819 19:42:56.460980  470984 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 19:42:56.460986  470984 command_runner.go:130] > [crio.tracing]
	I0819 19:42:56.460992  470984 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 19:42:56.460998  470984 command_runner.go:130] > # enable_tracing = false
	I0819 19:42:56.461004  470984 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0819 19:42:56.461010  470984 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 19:42:56.461017  470984 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 19:42:56.461023  470984 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0819 19:42:56.461027  470984 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 19:42:56.461032  470984 command_runner.go:130] > [crio.nri]
	I0819 19:42:56.461036  470984 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 19:42:56.461040  470984 command_runner.go:130] > # enable_nri = false
	I0819 19:42:56.461045  470984 command_runner.go:130] > # NRI socket to listen on.
	I0819 19:42:56.461049  470984 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 19:42:56.461056  470984 command_runner.go:130] > # NRI plugin directory to use.
	I0819 19:42:56.461060  470984 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 19:42:56.461067  470984 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 19:42:56.461072  470984 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 19:42:56.461079  470984 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 19:42:56.461085  470984 command_runner.go:130] > # nri_disable_connections = false
	I0819 19:42:56.461092  470984 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 19:42:56.461096  470984 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 19:42:56.461102  470984 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 19:42:56.461108  470984 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0819 19:42:56.461114  470984 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 19:42:56.461120  470984 command_runner.go:130] > [crio.stats]
	I0819 19:42:56.461126  470984 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 19:42:56.461146  470984 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 19:42:56.461154  470984 command_runner.go:130] > # stats_collection_period = 0
	I0819 19:42:56.461297  470984 cni.go:84] Creating CNI manager for ""
	I0819 19:42:56.461309  470984 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 19:42:56.461319  470984 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:42:56.461341  470984 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.35 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-548379 NodeName:multinode-548379 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.35"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.35 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:42:56.461505  470984 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.35
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-548379"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.35
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.35"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
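
	The kubeadm config rendered above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that is later copied to the node as /var/tmp/minikube/kubeadm.yaml.new. As an illustrative sketch only (not minikube's own code), the following Go program, assuming a local copy named kubeadm.yaml and the gopkg.in/yaml.v3 dependency, walks such a stream and prints each document's apiVersion and kind:

	// list_kinds.go - hypothetical helper, not part of minikube.
	// Iterates the documents of a multi-document kubeadm YAML stream
	// and prints the apiVersion and kind of each one.
	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		f, err := os.Open("kubeadm.yaml") // assumed local copy of the rendered config
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var tm typeMeta
			if err := dec.Decode(&tm); err != nil {
				if errors.Is(err, io.EOF) {
					break // end of the YAML stream
				}
				log.Fatal(err)
			}
			fmt.Printf("%s %s\n", tm.APIVersion, tm.Kind)
		}
	}

	Run against the config above, this would print the four document kinds shown (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration).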
	
	I0819 19:42:56.461571  470984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:42:56.472279  470984 command_runner.go:130] > kubeadm
	I0819 19:42:56.472310  470984 command_runner.go:130] > kubectl
	I0819 19:42:56.472314  470984 command_runner.go:130] > kubelet
	I0819 19:42:56.472358  470984 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:42:56.472418  470984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:42:56.482707  470984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0819 19:42:56.500172  470984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:42:56.516942  470984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0819 19:42:56.534736  470984 ssh_runner.go:195] Run: grep 192.168.39.35	control-plane.minikube.internal$ /etc/hosts
	I0819 19:42:56.538949  470984 command_runner.go:130] > 192.168.39.35	control-plane.minikube.internal
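
	The grep above confirms that control-plane.minikube.internal already resolves to 192.168.39.35 through /etc/hosts before kubelet is restarted. A minimal sketch of that check in Go, assuming the same address and hostname (hypothetical helper, not minikube's implementation):

	// ensure_hosts.go - illustrative only; minikube performs the equivalent
	// check with the grep shown above. Verifies the control-plane mapping
	// exists in /etc/hosts and appends it if missing (requires root).
	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"
	)

	const entry = "192.168.39.35\tcontrol-plane.minikube.internal" // assumed values from the log

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		for _, line := range strings.Split(string(data), "\n") {
			if strings.TrimSpace(line) == entry {
				fmt.Println("hosts entry already present")
				return
			}
		}
		f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0644)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		if _, err := f.WriteString(entry + "\n"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("hosts entry added")
	}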
	I0819 19:42:56.539129  470984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:42:56.683931  470984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:42:56.698805  470984 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379 for IP: 192.168.39.35
	I0819 19:42:56.698831  470984 certs.go:194] generating shared ca certs ...
	I0819 19:42:56.698850  470984 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:42:56.699010  470984 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:42:56.699046  470984 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:42:56.699056  470984 certs.go:256] generating profile certs ...
	I0819 19:42:56.699126  470984 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/client.key
	I0819 19:42:56.699179  470984 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/apiserver.key.1a7a4ed8
	I0819 19:42:56.699215  470984 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/proxy-client.key
	I0819 19:42:56.699226  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 19:42:56.699237  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 19:42:56.699249  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 19:42:56.699258  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 19:42:56.699270  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 19:42:56.699282  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 19:42:56.699294  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 19:42:56.699304  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 19:42:56.699406  470984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:42:56.699439  470984 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:42:56.699448  470984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:42:56.699470  470984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:42:56.699500  470984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:42:56.699521  470984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:42:56.699557  470984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:42:56.699585  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> /usr/share/ca-certificates/4381592.pem
	I0819 19:42:56.699600  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:42:56.699612  470984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem -> /usr/share/ca-certificates/438159.pem
	I0819 19:42:56.700207  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:42:56.728401  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:42:56.752971  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:42:56.778598  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:42:56.802950  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 19:42:56.827536  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:42:56.851614  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:42:56.876900  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/multinode-548379/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 19:42:56.901555  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:42:56.926681  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:42:56.951824  470984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:42:56.977393  470984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:42:56.994580  470984 ssh_runner.go:195] Run: openssl version
	I0819 19:42:57.000345  470984 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 19:42:57.000428  470984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:42:57.011544  470984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:42:57.016559  470984 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:42:57.016611  470984 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:42:57.016667  470984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:42:57.022423  470984 command_runner.go:130] > 3ec20f2e
	I0819 19:42:57.022519  470984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:42:57.032613  470984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:42:57.043983  470984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:42:57.048851  470984 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:42:57.048888  470984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:42:57.048951  470984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:42:57.054784  470984 command_runner.go:130] > b5213941
	I0819 19:42:57.054879  470984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:42:57.065118  470984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:42:57.076449  470984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:42:57.081102  470984 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:42:57.081170  470984 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:42:57.081228  470984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:42:57.087325  470984 command_runner.go:130] > 51391683
	I0819 19:42:57.087411  470984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
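
	The three blocks above follow OpenSSL's hash-based trust-store convention: each certificate under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout" and then symlinked into /etc/ssl/certs as <hash>.0 so lookup-by-hash can find it. A hypothetical Go helper sketching the same two steps, assuming root privileges and the minikubeCA.pem path from the log:

	// trust_cert.go - hypothetical sketch of the hash-and-symlink step seen
	// above: compute the OpenSSL subject hash of a certificate and link it
	// into /etc/ssl/certs as "<hash>.0".
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		certPath := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

		// Same command the log shows: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mirror "ln -fs": replace any existing link
		if err := os.Symlink(certPath, link); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("linked %s -> %s\n", link, certPath)
	}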
	I0819 19:42:57.097254  470984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:42:57.102107  470984 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:42:57.102147  470984 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 19:42:57.102156  470984 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0819 19:42:57.102164  470984 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 19:42:57.102170  470984 command_runner.go:130] > Access: 2024-08-19 19:36:20.743248210 +0000
	I0819 19:42:57.102175  470984 command_runner.go:130] > Modify: 2024-08-19 19:36:20.743248210 +0000
	I0819 19:42:57.102179  470984 command_runner.go:130] > Change: 2024-08-19 19:36:20.743248210 +0000
	I0819 19:42:57.102184  470984 command_runner.go:130] >  Birth: 2024-08-19 19:36:20.743248210 +0000
	I0819 19:42:57.102242  470984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:42:57.108203  470984 command_runner.go:130] > Certificate will not expire
	I0819 19:42:57.108288  470984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:42:57.114225  470984 command_runner.go:130] > Certificate will not expire
	I0819 19:42:57.114311  470984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:42:57.120181  470984 command_runner.go:130] > Certificate will not expire
	I0819 19:42:57.120280  470984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:42:57.126088  470984 command_runner.go:130] > Certificate will not expire
	I0819 19:42:57.126186  470984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:42:57.132018  470984 command_runner.go:130] > Certificate will not expire
	I0819 19:42:57.132129  470984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 19:42:57.138676  470984 command_runner.go:130] > Certificate will not expire
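
	Each "openssl x509 -noout -in <cert> -checkend 86400" call above asks whether the certificate expires within the next 24 hours (86400 seconds). A rough Go equivalent using crypto/x509, assuming the apiserver-kubelet-client.crt path from the log:

	// checkend.go - hypothetical equivalent of "openssl x509 -checkend 86400":
	// report whether a PEM certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}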
	I0819 19:42:57.138933  470984 kubeadm.go:392] StartCluster: {Name:multinode-548379 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-548379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:42:57.139052  470984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:42:57.139120  470984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:42:57.176688  470984 command_runner.go:130] > f88e348b3a84742ec396ed84b9260f8708967811698fae9424a4e5c6f73eb9ec
	I0819 19:42:57.176721  470984 command_runner.go:130] > 0adbbf88e8e6f5c8f99f70f167d0be23685b6efe52beb7c096a3d94e98e27772
	I0819 19:42:57.176727  470984 command_runner.go:130] > 3947c41c8021a850328a6ef431f4aa7c273b46da629f27172dd9594e95a309e8
	I0819 19:42:57.176734  470984 command_runner.go:130] > 8cde9d50116f223ef6493538f5ae20918c477628d4127a2ac98182f64446395b
	I0819 19:42:57.176739  470984 command_runner.go:130] > 97b3ea7b8c2ce2841b0ee7f45eca09f3d59b14fbb72dd20976dd9581f6ecc942
	I0819 19:42:57.176744  470984 command_runner.go:130] > e83dd57bfe6d4f267dc1e80968e066a1107866708e1ed336dd94dde65dc97994
	I0819 19:42:57.176750  470984 command_runner.go:130] > 24eff3c1f0c13300bdb7581b9cafdc8c7c9bea7e77c52fc6491f4e3055c7eea1
	I0819 19:42:57.176757  470984 command_runner.go:130] > 0c3794f7593119cdb9a488c4223a840b2b1089549a68b2866400eac7c25f9ec4
	I0819 19:42:57.178456  470984 cri.go:89] found id: "f88e348b3a84742ec396ed84b9260f8708967811698fae9424a4e5c6f73eb9ec"
	I0819 19:42:57.178480  470984 cri.go:89] found id: "0adbbf88e8e6f5c8f99f70f167d0be23685b6efe52beb7c096a3d94e98e27772"
	I0819 19:42:57.178484  470984 cri.go:89] found id: "3947c41c8021a850328a6ef431f4aa7c273b46da629f27172dd9594e95a309e8"
	I0819 19:42:57.178488  470984 cri.go:89] found id: "8cde9d50116f223ef6493538f5ae20918c477628d4127a2ac98182f64446395b"
	I0819 19:42:57.178492  470984 cri.go:89] found id: "97b3ea7b8c2ce2841b0ee7f45eca09f3d59b14fbb72dd20976dd9581f6ecc942"
	I0819 19:42:57.178496  470984 cri.go:89] found id: "e83dd57bfe6d4f267dc1e80968e066a1107866708e1ed336dd94dde65dc97994"
	I0819 19:42:57.178499  470984 cri.go:89] found id: "24eff3c1f0c13300bdb7581b9cafdc8c7c9bea7e77c52fc6491f4e3055c7eea1"
	I0819 19:42:57.178501  470984 cri.go:89] found id: "0c3794f7593119cdb9a488c4223a840b2b1089549a68b2866400eac7c25f9ec4"
	I0819 19:42:57.178505  470984 cri.go:89] found id: ""
	I0819 19:42:57.178562  470984 ssh_runner.go:195] Run: sudo runc list -f json
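
	The crictl invocation above returns one container ID per line for everything in the kube-system namespace, which the "found id:" lines then echo back. An illustrative Go sketch (not minikube's cri helper) that runs the same command and collects the IDs:

	// list_kube_system.go - illustrative sketch of the container listing above:
	// run the same crictl command and collect the returned container IDs.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Mirrors: sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println(id)
		}
	}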
	
	
	==> CRI-O <==
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.643640502Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096825643615929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=982fd8a9-ed51-4b5a-bf2e-c4c4411c7847 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.644053559Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f6a8440-c850-4008-b729-673a76ed2d43 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.644181751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f6a8440-c850-4008-b729-673a76ed2d43 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.644528001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ce06179cd684646b66dbd324528e06932e8762fb7fb07e6495116aafe4a5e5a,PodSandboxId:c9ee51d13730c97f9ae2b716aabb29354754fc5e7229a23aa26fdf52e5558ef4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724096618567373859,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb22ddb238a00e49806fe2f3b3d3903d48456469a2171875dad0657f770e963,PodSandboxId:e2a1b8fee23ca1e8f926715fbb35a9aec79376b5c595b6efaf721bf7182ab83d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724096585124100451,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5526087b87c13c5800d983aaa9ee7dba1126cc9d45ae7a4da130df1907428b5,PodSandboxId:547e595b7b5c01f0e0def61a83c96ff3cf563c3f44dd035bf1ea22bdd26c88ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724096585099078943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f
49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b1bd21d08ea6058aa3244307ac730803d425e7942459534b2e2f1d8afcf7a5,PodSandboxId:8a448d625b7cefdfb96ba6e560c16df86fb29923713f3b3c62638806b24b995f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724096584941318887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae957bd572857eca22ae7721f8b447ffa030532411fe7768a67d7e9fba77d34f,PodSandboxId:4a044dfbc16cc1b1a23a820838db146a599c773c5067243491256d58db88e7b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724096584892451234,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946a6f1e520ab3866b797503ca3f72b7e262206f3595d0eafdb8f0bdc25d5845,PodSandboxId:3d6d7ac6cab8d586b365307ed6cbdd5b0f0a5b5504582f04cbccd6f3b1286f0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724096579998215591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c9bcc4dcef7cee9bcb1ab6f982a11667ff9fb0c9c2cd7290e1c8fa52eac90e,PodSandboxId:caec45dd0098803294ca6fb9f81f93e9a59f18eacdf92503e16ae9ded54ed1fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724096580029318660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9ef8b8013cdc17033d16419fd14ec8e27d80e9deedb824c413d93e9da4fb06,PodSandboxId:40c11ab920ab3b8b2e434a89638dc8418c8ed954966d4bc37005a4b5dcd47442,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724096580005690714,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d520dc4f2fb7ed6e16b3b726559b7e9fdb6e4422b957094032e868df08609cc5,PodSandboxId:49530a6bcf60e0045013d3f7fde0245a60ecb75818b258ac01eb896c24e95115,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724096579992393873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9776d352552c1949c2e922c4b1dbccb4ef607d23ad40fa301057767159a8df7,PodSandboxId:fbb109b13ee5d14f80a7a4664d52d242bbef71efdf0dd882cc60050992ca4e8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724096264085479148,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88e348b3a84742ec396ed84b9260f8708967811698fae9424a4e5c6f73eb9ec,PodSandboxId:2470ac60e23979596614d3254f07aeb346b230a30af3cc3bc646c124c18ae9db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724096211013464242,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0adbbf88e8e6f5c8f99f70f167d0be23685b6efe52beb7c096a3d94e98e27772,PodSandboxId:362dc4a5650b90487f55397ab06653fb47fb8df2f0641963873dfbd1d525c55d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724096211010541482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3947c41c8021a850328a6ef431f4aa7c273b46da629f27172dd9594e95a309e8,PodSandboxId:34092c53295429c688907b804397c216002b82eeb57dbb2ecc72b6e1d7a69c00,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724096199493021910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cde9d50116f223ef6493538f5ae20918c477628d4127a2ac98182f64446395b,PodSandboxId:b29008cd62ee35360342dcf8c299f9df4d281b643ef01a34d97363c53e10df40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724096196163904164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e83dd57bfe6d4f267dc1e80968e066a1107866708e1ed336dd94dde65dc97994,PodSandboxId:87e7b3fa3f69e716a3941fc1358c45a3623db4d186825437b12fbd558d518f2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724096183675176161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d
2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3ea7b8c2ce2841b0ee7f45eca09f3d59b14fbb72dd20976dd9581f6ecc942,PodSandboxId:9887b7e5dfb51f294b8ad7b79570292afc80291232d468516e3aa0d8cfa174c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724096183677835635,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24eff3c1f0c13300bdb7581b9cafdc8c7c9bea7e77c52fc6491f4e3055c7eea1,PodSandboxId:688f4e7ce5e29c81c6cab2795297896410a965465b22c91ae0c5f4f711c4a43a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724096183619481989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3794f7593119cdb9a488c4223a840b2b1089549a68b2866400eac7c25f9ec4,PodSandboxId:87ef372961342fafea467cb8be7006d850b5c92cfdf6baeccd5c98386a23d1e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724096183592835371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f6a8440-c850-4008-b729-673a76ed2d43 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.695091752Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a3a2ff1-eaa1-46f5-ac65-d33d709d4738 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.695321736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a3a2ff1-eaa1-46f5-ac65-d33d709d4738 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.696586553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3434ca0-4021-4da9-a619-1fffaf3f39da name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.697021549Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096825696998685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3434ca0-4021-4da9-a619-1fffaf3f39da name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.697695198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad49c575-1d3a-4f48-8767-76505a2ecce1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.697771654Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad49c575-1d3a-4f48-8767-76505a2ecce1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.698140004Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ce06179cd684646b66dbd324528e06932e8762fb7fb07e6495116aafe4a5e5a,PodSandboxId:c9ee51d13730c97f9ae2b716aabb29354754fc5e7229a23aa26fdf52e5558ef4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724096618567373859,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb22ddb238a00e49806fe2f3b3d3903d48456469a2171875dad0657f770e963,PodSandboxId:e2a1b8fee23ca1e8f926715fbb35a9aec79376b5c595b6efaf721bf7182ab83d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724096585124100451,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5526087b87c13c5800d983aaa9ee7dba1126cc9d45ae7a4da130df1907428b5,PodSandboxId:547e595b7b5c01f0e0def61a83c96ff3cf563c3f44dd035bf1ea22bdd26c88ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724096585099078943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f
49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b1bd21d08ea6058aa3244307ac730803d425e7942459534b2e2f1d8afcf7a5,PodSandboxId:8a448d625b7cefdfb96ba6e560c16df86fb29923713f3b3c62638806b24b995f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724096584941318887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae957bd572857eca22ae7721f8b447ffa030532411fe7768a67d7e9fba77d34f,PodSandboxId:4a044dfbc16cc1b1a23a820838db146a599c773c5067243491256d58db88e7b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724096584892451234,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946a6f1e520ab3866b797503ca3f72b7e262206f3595d0eafdb8f0bdc25d5845,PodSandboxId:3d6d7ac6cab8d586b365307ed6cbdd5b0f0a5b5504582f04cbccd6f3b1286f0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724096579998215591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c9bcc4dcef7cee9bcb1ab6f982a11667ff9fb0c9c2cd7290e1c8fa52eac90e,PodSandboxId:caec45dd0098803294ca6fb9f81f93e9a59f18eacdf92503e16ae9ded54ed1fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724096580029318660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9ef8b8013cdc17033d16419fd14ec8e27d80e9deedb824c413d93e9da4fb06,PodSandboxId:40c11ab920ab3b8b2e434a89638dc8418c8ed954966d4bc37005a4b5dcd47442,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724096580005690714,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d520dc4f2fb7ed6e16b3b726559b7e9fdb6e4422b957094032e868df08609cc5,PodSandboxId:49530a6bcf60e0045013d3f7fde0245a60ecb75818b258ac01eb896c24e95115,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724096579992393873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9776d352552c1949c2e922c4b1dbccb4ef607d23ad40fa301057767159a8df7,PodSandboxId:fbb109b13ee5d14f80a7a4664d52d242bbef71efdf0dd882cc60050992ca4e8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724096264085479148,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88e348b3a84742ec396ed84b9260f8708967811698fae9424a4e5c6f73eb9ec,PodSandboxId:2470ac60e23979596614d3254f07aeb346b230a30af3cc3bc646c124c18ae9db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724096211013464242,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0adbbf88e8e6f5c8f99f70f167d0be23685b6efe52beb7c096a3d94e98e27772,PodSandboxId:362dc4a5650b90487f55397ab06653fb47fb8df2f0641963873dfbd1d525c55d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724096211010541482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3947c41c8021a850328a6ef431f4aa7c273b46da629f27172dd9594e95a309e8,PodSandboxId:34092c53295429c688907b804397c216002b82eeb57dbb2ecc72b6e1d7a69c00,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724096199493021910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cde9d50116f223ef6493538f5ae20918c477628d4127a2ac98182f64446395b,PodSandboxId:b29008cd62ee35360342dcf8c299f9df4d281b643ef01a34d97363c53e10df40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724096196163904164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e83dd57bfe6d4f267dc1e80968e066a1107866708e1ed336dd94dde65dc97994,PodSandboxId:87e7b3fa3f69e716a3941fc1358c45a3623db4d186825437b12fbd558d518f2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724096183675176161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d
2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3ea7b8c2ce2841b0ee7f45eca09f3d59b14fbb72dd20976dd9581f6ecc942,PodSandboxId:9887b7e5dfb51f294b8ad7b79570292afc80291232d468516e3aa0d8cfa174c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724096183677835635,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24eff3c1f0c13300bdb7581b9cafdc8c7c9bea7e77c52fc6491f4e3055c7eea1,PodSandboxId:688f4e7ce5e29c81c6cab2795297896410a965465b22c91ae0c5f4f711c4a43a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724096183619481989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3794f7593119cdb9a488c4223a840b2b1089549a68b2866400eac7c25f9ec4,PodSandboxId:87ef372961342fafea467cb8be7006d850b5c92cfdf6baeccd5c98386a23d1e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724096183592835371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad49c575-1d3a-4f48-8767-76505a2ecce1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.738650264Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ceef30ff-f100-4c54-aa29-ad39eaebbc65 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.738733010Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ceef30ff-f100-4c54-aa29-ad39eaebbc65 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.739820836Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d703fbb5-b25b-4a75-b250-97b58ecc9fb4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.740718274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096825740663067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d703fbb5-b25b-4a75-b250-97b58ecc9fb4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.741186712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5e82353-5aaa-49e8-b60c-ed86a1311477 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.741239512Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5e82353-5aaa-49e8-b60c-ed86a1311477 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.743029434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ce06179cd684646b66dbd324528e06932e8762fb7fb07e6495116aafe4a5e5a,PodSandboxId:c9ee51d13730c97f9ae2b716aabb29354754fc5e7229a23aa26fdf52e5558ef4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724096618567373859,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb22ddb238a00e49806fe2f3b3d3903d48456469a2171875dad0657f770e963,PodSandboxId:e2a1b8fee23ca1e8f926715fbb35a9aec79376b5c595b6efaf721bf7182ab83d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724096585124100451,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5526087b87c13c5800d983aaa9ee7dba1126cc9d45ae7a4da130df1907428b5,PodSandboxId:547e595b7b5c01f0e0def61a83c96ff3cf563c3f44dd035bf1ea22bdd26c88ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724096585099078943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f
49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b1bd21d08ea6058aa3244307ac730803d425e7942459534b2e2f1d8afcf7a5,PodSandboxId:8a448d625b7cefdfb96ba6e560c16df86fb29923713f3b3c62638806b24b995f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724096584941318887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae957bd572857eca22ae7721f8b447ffa030532411fe7768a67d7e9fba77d34f,PodSandboxId:4a044dfbc16cc1b1a23a820838db146a599c773c5067243491256d58db88e7b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724096584892451234,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946a6f1e520ab3866b797503ca3f72b7e262206f3595d0eafdb8f0bdc25d5845,PodSandboxId:3d6d7ac6cab8d586b365307ed6cbdd5b0f0a5b5504582f04cbccd6f3b1286f0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724096579998215591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c9bcc4dcef7cee9bcb1ab6f982a11667ff9fb0c9c2cd7290e1c8fa52eac90e,PodSandboxId:caec45dd0098803294ca6fb9f81f93e9a59f18eacdf92503e16ae9ded54ed1fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724096580029318660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9ef8b8013cdc17033d16419fd14ec8e27d80e9deedb824c413d93e9da4fb06,PodSandboxId:40c11ab920ab3b8b2e434a89638dc8418c8ed954966d4bc37005a4b5dcd47442,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724096580005690714,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d520dc4f2fb7ed6e16b3b726559b7e9fdb6e4422b957094032e868df08609cc5,PodSandboxId:49530a6bcf60e0045013d3f7fde0245a60ecb75818b258ac01eb896c24e95115,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724096579992393873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9776d352552c1949c2e922c4b1dbccb4ef607d23ad40fa301057767159a8df7,PodSandboxId:fbb109b13ee5d14f80a7a4664d52d242bbef71efdf0dd882cc60050992ca4e8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724096264085479148,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88e348b3a84742ec396ed84b9260f8708967811698fae9424a4e5c6f73eb9ec,PodSandboxId:2470ac60e23979596614d3254f07aeb346b230a30af3cc3bc646c124c18ae9db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724096211013464242,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0adbbf88e8e6f5c8f99f70f167d0be23685b6efe52beb7c096a3d94e98e27772,PodSandboxId:362dc4a5650b90487f55397ab06653fb47fb8df2f0641963873dfbd1d525c55d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724096211010541482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3947c41c8021a850328a6ef431f4aa7c273b46da629f27172dd9594e95a309e8,PodSandboxId:34092c53295429c688907b804397c216002b82eeb57dbb2ecc72b6e1d7a69c00,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724096199493021910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cde9d50116f223ef6493538f5ae20918c477628d4127a2ac98182f64446395b,PodSandboxId:b29008cd62ee35360342dcf8c299f9df4d281b643ef01a34d97363c53e10df40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724096196163904164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e83dd57bfe6d4f267dc1e80968e066a1107866708e1ed336dd94dde65dc97994,PodSandboxId:87e7b3fa3f69e716a3941fc1358c45a3623db4d186825437b12fbd558d518f2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724096183675176161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d
2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3ea7b8c2ce2841b0ee7f45eca09f3d59b14fbb72dd20976dd9581f6ecc942,PodSandboxId:9887b7e5dfb51f294b8ad7b79570292afc80291232d468516e3aa0d8cfa174c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724096183677835635,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24eff3c1f0c13300bdb7581b9cafdc8c7c9bea7e77c52fc6491f4e3055c7eea1,PodSandboxId:688f4e7ce5e29c81c6cab2795297896410a965465b22c91ae0c5f4f711c4a43a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724096183619481989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3794f7593119cdb9a488c4223a840b2b1089549a68b2866400eac7c25f9ec4,PodSandboxId:87ef372961342fafea467cb8be7006d850b5c92cfdf6baeccd5c98386a23d1e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724096183592835371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a5e82353-5aaa-49e8-b60c-ed86a1311477 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.794396036Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c3f8bb4-284e-42d0-8c08-4bd9aff7a87d name=/runtime.v1.RuntimeService/Version
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.794487714Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c3f8bb4-284e-42d0-8c08-4bd9aff7a87d name=/runtime.v1.RuntimeService/Version
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.795754643Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e58f7f3-77e4-4e0a-8bb3-04076f748086 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.796535189Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096825796506505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e58f7f3-77e4-4e0a-8bb3-04076f748086 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.797388283Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ac5b2af-1b3a-4577-b305-49a35723b649 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.797628004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ac5b2af-1b3a-4577-b305-49a35723b649 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:47:05 multinode-548379 crio[2728]: time="2024-08-19 19:47:05.798299558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ce06179cd684646b66dbd324528e06932e8762fb7fb07e6495116aafe4a5e5a,PodSandboxId:c9ee51d13730c97f9ae2b716aabb29354754fc5e7229a23aa26fdf52e5558ef4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724096618567373859,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb22ddb238a00e49806fe2f3b3d3903d48456469a2171875dad0657f770e963,PodSandboxId:e2a1b8fee23ca1e8f926715fbb35a9aec79376b5c595b6efaf721bf7182ab83d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724096585124100451,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5526087b87c13c5800d983aaa9ee7dba1126cc9d45ae7a4da130df1907428b5,PodSandboxId:547e595b7b5c01f0e0def61a83c96ff3cf563c3f44dd035bf1ea22bdd26c88ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724096585099078943,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f
49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b1bd21d08ea6058aa3244307ac730803d425e7942459534b2e2f1d8afcf7a5,PodSandboxId:8a448d625b7cefdfb96ba6e560c16df86fb29923713f3b3c62638806b24b995f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724096584941318887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae957bd572857eca22ae7721f8b447ffa030532411fe7768a67d7e9fba77d34f,PodSandboxId:4a044dfbc16cc1b1a23a820838db146a599c773c5067243491256d58db88e7b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724096584892451234,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946a6f1e520ab3866b797503ca3f72b7e262206f3595d0eafdb8f0bdc25d5845,PodSandboxId:3d6d7ac6cab8d586b365307ed6cbdd5b0f0a5b5504582f04cbccd6f3b1286f0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724096579998215591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c9bcc4dcef7cee9bcb1ab6f982a11667ff9fb0c9c2cd7290e1c8fa52eac90e,PodSandboxId:caec45dd0098803294ca6fb9f81f93e9a59f18eacdf92503e16ae9ded54ed1fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724096580029318660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9ef8b8013cdc17033d16419fd14ec8e27d80e9deedb824c413d93e9da4fb06,PodSandboxId:40c11ab920ab3b8b2e434a89638dc8418c8ed954966d4bc37005a4b5dcd47442,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724096580005690714,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d520dc4f2fb7ed6e16b3b726559b7e9fdb6e4422b957094032e868df08609cc5,PodSandboxId:49530a6bcf60e0045013d3f7fde0245a60ecb75818b258ac01eb896c24e95115,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724096579992393873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9776d352552c1949c2e922c4b1dbccb4ef607d23ad40fa301057767159a8df7,PodSandboxId:fbb109b13ee5d14f80a7a4664d52d242bbef71efdf0dd882cc60050992ca4e8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724096264085479148,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bzhsh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f64d272f-2758-46b2-91d7-bdd17f9eba4b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88e348b3a84742ec396ed84b9260f8708967811698fae9424a4e5c6f73eb9ec,PodSandboxId:2470ac60e23979596614d3254f07aeb346b230a30af3cc3bc646c124c18ae9db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724096211013464242,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edf76e0-7462-4c8a-9b0a-4081d076142b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0adbbf88e8e6f5c8f99f70f167d0be23685b6efe52beb7c096a3d94e98e27772,PodSandboxId:362dc4a5650b90487f55397ab06653fb47fb8df2f0641963873dfbd1d525c55d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724096211010541482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tjtx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad1a3fc-dc93-4bb3-812f-2987f9c128f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3947c41c8021a850328a6ef431f4aa7c273b46da629f27172dd9594e95a309e8,PodSandboxId:34092c53295429c688907b804397c216002b82eeb57dbb2ecc72b6e1d7a69c00,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724096199493021910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dghqn,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 905fd606-2624-4e3b-8b82-d907f49c1043,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cde9d50116f223ef6493538f5ae20918c477628d4127a2ac98182f64446395b,PodSandboxId:b29008cd62ee35360342dcf8c299f9df4d281b643ef01a34d97363c53e10df40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724096196163904164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wwv5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 442a87f5-67c8-423e-b61b-e3e397f12878,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e83dd57bfe6d4f267dc1e80968e066a1107866708e1ed336dd94dde65dc97994,PodSandboxId:87e7b3fa3f69e716a3941fc1358c45a3623db4d186825437b12fbd558d518f2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724096183675176161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d
2825e4b75466632f8e142348a9ce49,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3ea7b8c2ce2841b0ee7f45eca09f3d59b14fbb72dd20976dd9581f6ecc942,PodSandboxId:9887b7e5dfb51f294b8ad7b79570292afc80291232d468516e3aa0d8cfa174c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724096183677835635,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9570f8fb9abb17bf0a0b31add8ce4a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24eff3c1f0c13300bdb7581b9cafdc8c7c9bea7e77c52fc6491f4e3055c7eea1,PodSandboxId:688f4e7ce5e29c81c6cab2795297896410a965465b22c91ae0c5f4f711c4a43a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724096183619481989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 447bece717c013f58f18e8a53ade697f,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3794f7593119cdb9a488c4223a840b2b1089549a68b2866400eac7c25f9ec4,PodSandboxId:87ef372961342fafea467cb8be7006d850b5c92cfdf6baeccd5c98386a23d1e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724096183592835371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-548379,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349eec867f2a669372e6320a95834f7,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ac5b2af-1b3a-4577-b305-49a35723b649 name=/runtime.v1.RuntimeService/ListContainers
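
(Editorial aside.) The repeated Version / ImageFsInfo / ListContainers entries above are CRI-O answering routine CRI polling over its gRPC unix socket; the "No filters were applied, returning full container list" lines correspond to a ListContainers call with an empty filter. A minimal Go sketch of issuing the same /runtime.v1.RuntimeService/ListContainers call is below. It is not part of the test suite and assumes it runs on the node, with CRI-O listening at unix:///var/run/crio/crio.sock (the endpoint named in the node annotations further down).

```go
// Sketch: query CRI-O's ListContainers the same way the debug log above shows,
// with an empty filter so both RUNNING and EXITED containers are returned.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The CRI is plain gRPC over a unix socket; this path is an assumption
	// taken from the cri-socket annotation in the node description below.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// Empty filter == "No filters were applied, returning full container list".
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n",
			c.Id[:13], c.State, c.Labels["io.kubernetes.container.name"])
	}
}
```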
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7ce06179cd684       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   c9ee51d13730c       busybox-7dff88458-bzhsh
	eeb22ddb238a0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   e2a1b8fee23ca       coredns-6f6b679f8f-tjtx5
	e5526087b87c1       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   547e595b7b5c0       kindnet-dghqn
	66b1bd21d08ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   8a448d625b7ce       storage-provisioner
	ae957bd572857       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   4a044dfbc16cc       kube-proxy-wwv5c
	15c9bcc4dcef7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   caec45dd00988       kube-apiserver-multinode-548379
	9d9ef8b8013cd       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   40c11ab920ab3       kube-scheduler-multinode-548379
	946a6f1e520ab       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   3d6d7ac6cab8d       etcd-multinode-548379
	d520dc4f2fb7e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   49530a6bcf60e       kube-controller-manager-multinode-548379
	e9776d352552c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   fbb109b13ee5d       busybox-7dff88458-bzhsh
	f88e348b3a847       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   2470ac60e2397       storage-provisioner
	0adbbf88e8e6f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   362dc4a5650b9       coredns-6f6b679f8f-tjtx5
	3947c41c8021a       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   34092c5329542       kindnet-dghqn
	8cde9d50116f2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   b29008cd62ee3       kube-proxy-wwv5c
	97b3ea7b8c2ce       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   9887b7e5dfb51       etcd-multinode-548379
	e83dd57bfe6d4       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   87e7b3fa3f69e       kube-scheduler-multinode-548379
	24eff3c1f0c13       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   688f4e7ce5e29       kube-controller-manager-multinode-548379
	0c3794f759311       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   87ef372961342       kube-apiserver-multinode-548379
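
(Editorial aside.) Every control-plane and addon container in this table appears twice: an Exited ATTEMPT 0 from the first boot roughly 10 minutes earlier and a Running ATTEMPT 1 from the restart about 4 minutes ago. A sketch of confirming the same restart pattern from the API-server side with client-go, rather than from the CRI, follows; it assumes a kubeconfig at ~/.kube/config pointing at this cluster.

```go
// Sketch: list kube-system container statuses; after the node's second boot
// each should report RestartCount 1, matching the ATTEMPT column above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig points at this minikube cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s restarts=%d ready=%v\n", p.Name, st.Name, st.RestartCount, st.Ready)
		}
	}
}
```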
	
	
	==> coredns [0adbbf88e8e6f5c8f99f70f167d0be23685b6efe52beb7c096a3d94e98e27772] <==
	[INFO] 10.244.1.2:53626 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002159167s
	[INFO] 10.244.1.2:56073 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088712s
	[INFO] 10.244.1.2:59545 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067922s
	[INFO] 10.244.1.2:42139 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001637555s
	[INFO] 10.244.1.2:34970 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064038s
	[INFO] 10.244.1.2:39551 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000141993s
	[INFO] 10.244.1.2:51608 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073165s
	[INFO] 10.244.0.3:50563 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000073873s
	[INFO] 10.244.0.3:52236 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000040179s
	[INFO] 10.244.0.3:58501 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000035598s
	[INFO] 10.244.0.3:43640 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000027295s
	[INFO] 10.244.1.2:49058 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169808s
	[INFO] 10.244.1.2:43243 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090393s
	[INFO] 10.244.1.2:52058 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097467s
	[INFO] 10.244.1.2:55033 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080488s
	[INFO] 10.244.0.3:43467 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112671s
	[INFO] 10.244.0.3:52651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099421s
	[INFO] 10.244.0.3:44060 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079557s
	[INFO] 10.244.0.3:38985 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111723s
	[INFO] 10.244.1.2:55909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121602s
	[INFO] 10.244.1.2:34912 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116408s
	[INFO] 10.244.1.2:37941 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073106s
	[INFO] 10.244.1.2:40067 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082935s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
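
(Editorial aside.) The NXDOMAIN/NOERROR pattern in this CoreDNS log is the pod resolver's search-path expansion at work: the absolute names "kubernetes.default." and "kubernetes.default.default.svc.cluster.local." do not exist, while the fully qualified "kubernetes.default.svc.cluster.local." answers NOERROR. A minimal sketch of reproducing the pattern, assuming it runs inside a pod of this cluster where /etc/resolv.conf carries the cluster search domains and ndots:5:

```go
// Sketch: issue the same three lookups the log above records.
package main

import (
	"fmt"
	"net"
)

func main() {
	for _, host := range []string{
		"kubernetes.default.",                   // absolute, forwarded upstream as-is -> NXDOMAIN
		"kubernetes.default.svc.cluster.local.", // absolute, fully qualified -> NOERROR
		"kubernetes.default",                    // relative, expanded via the search path; this one succeeds
	} {
		addrs, err := net.LookupHost(host)
		fmt.Printf("%-40s addrs=%v err=%v\n", host, addrs, err)
	}
}
```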
	
	
	==> coredns [eeb22ddb238a00e49806fe2f3b3d3903d48456469a2171875dad0657f770e963] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38906 - 19609 "HINFO IN 8183877632649629386.8280303456427227633. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021951872s
	
	
	==> describe nodes <==
	Name:               multinode-548379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-548379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=multinode-548379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_36_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:36:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-548379
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:46:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:43:02 +0000   Mon, 19 Aug 2024 19:36:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:43:02 +0000   Mon, 19 Aug 2024 19:36:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:43:02 +0000   Mon, 19 Aug 2024 19:36:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:43:02 +0000   Mon, 19 Aug 2024 19:36:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.35
	  Hostname:    multinode-548379
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9159c9db44cd4d7da4cdf638769b739e
	  System UUID:                9159c9db-44cd-4d7d-a4cd-f638769b739e
	  Boot ID:                    b72926b1-7c78-4bb8-8dd8-4c1656ba65cc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bzhsh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m26s
	  kube-system                 coredns-6f6b679f8f-tjtx5                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-548379                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-dghqn                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-548379             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-548379    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-wwv5c                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-548379             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-548379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-548379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-548379 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-548379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-548379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-548379 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-548379 event: Registered Node multinode-548379 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-548379 status is now: NodeReady
	  Normal  Starting                 4m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node multinode-548379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node multinode-548379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node multinode-548379 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node multinode-548379 event: Registered Node multinode-548379 in Controller
	
	
	Name:               multinode-548379-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-548379-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=multinode-548379
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T19_43_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:43:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-548379-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:44:45 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 19:44:14 +0000   Mon, 19 Aug 2024 19:45:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 19:44:14 +0000   Mon, 19 Aug 2024 19:45:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 19:44:14 +0000   Mon, 19 Aug 2024 19:45:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 19:44:14 +0000   Mon, 19 Aug 2024 19:45:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    multinode-548379-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9eca4535221b4296ac8c5a4d710f7f12
	  System UUID:                9eca4535-221b-4296-ac8c-5a4d710f7f12
	  Boot ID:                    ac1cf6cd-7037-4df9-a152-b5f965c742f3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-df4jm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kindnet-pwhrw              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m48s
	  kube-system                 kube-proxy-knvbd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m18s                  kube-proxy       
	  Normal  Starting                 9m42s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m48s (x2 over 9m48s)  kubelet          Node multinode-548379-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m48s (x2 over 9m48s)  kubelet          Node multinode-548379-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m48s (x2 over 9m48s)  kubelet          Node multinode-548379-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m28s                  kubelet          Node multinode-548379-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m22s (x2 over 3m23s)  kubelet          Node multinode-548379-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m22s (x2 over 3m23s)  kubelet          Node multinode-548379-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m22s (x2 over 3m23s)  kubelet          Node multinode-548379-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-548379-m02 status is now: NodeReady
	  Normal  NodeNotReady             100s                   node-controller  Node multinode-548379-m02 status is now: NodeNotReady
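	The Unknown conditions and the node.kubernetes.io/unreachable taints on multinode-548379-m02 line up with the NodeNotReady event at the end of its event list: the kubelet stopped renewing the node lease at 19:44:45, and the node lifecycle controller flipped every condition to Unknown at 19:45:26. Purely as an illustrative check (assuming a kubectl context named after the multinode-548379 profile, which minikube creates by default), the same conditions could be read back directly:
	
		kubectl --context multinode-548379 get node multinode-548379-m02 \
		  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'
	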
	
	
	==> dmesg <==
	[  +0.059633] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.178731] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.155996] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.282527] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +4.067585] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +3.753020] systemd-fstab-generator[891]: Ignoring "noauto" option for root device
	[  +0.065275] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.988151] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +0.076971] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.621058] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.532343] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.649446] kauditd_printk_skb: 32 callbacks suppressed
	[Aug19 19:37] kauditd_printk_skb: 14 callbacks suppressed
	[Aug19 19:42] systemd-fstab-generator[2648]: Ignoring "noauto" option for root device
	[  +0.185713] systemd-fstab-generator[2660]: Ignoring "noauto" option for root device
	[  +0.192391] systemd-fstab-generator[2674]: Ignoring "noauto" option for root device
	[  +0.146597] systemd-fstab-generator[2686]: Ignoring "noauto" option for root device
	[  +0.282321] systemd-fstab-generator[2714]: Ignoring "noauto" option for root device
	[  +0.729315] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +2.526523] systemd-fstab-generator[2935]: Ignoring "noauto" option for root device
	[  +1.059965] kauditd_printk_skb: 179 callbacks suppressed
	[Aug19 19:43] kauditd_printk_skb: 25 callbacks suppressed
	[ +14.480559] systemd-fstab-generator[3792]: Ignoring "noauto" option for root device
	[  +0.089734] kauditd_printk_skb: 6 callbacks suppressed
	[ +18.780510] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [946a6f1e520ab3866b797503ca3f72b7e262206f3595d0eafdb8f0bdc25d5845] <==
	{"level":"info","ts":"2024-08-19T19:43:00.429574Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"45f5838de4bd43f1","local-member-id":"732232f81d76e930","added-peer-id":"732232f81d76e930","added-peer-peer-urls":["https://192.168.39.35:2380"]}
	{"level":"info","ts":"2024-08-19T19:43:00.429823Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"45f5838de4bd43f1","local-member-id":"732232f81d76e930","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:43:00.429879Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:43:00.442547Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:43:00.447433Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T19:43:00.447654Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"732232f81d76e930","initial-advertise-peer-urls":["https://192.168.39.35:2380"],"listen-peer-urls":["https://192.168.39.35:2380"],"advertise-client-urls":["https://192.168.39.35:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.35:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T19:43:00.447710Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T19:43:00.447829Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-08-19T19:43:00.447852Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-08-19T19:43:01.373204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T19:43:01.373320Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T19:43:01.373375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 received MsgPreVoteResp from 732232f81d76e930 at term 2"}
	{"level":"info","ts":"2024-08-19T19:43:01.373418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T19:43:01.373443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 received MsgVoteResp from 732232f81d76e930 at term 3"}
	{"level":"info","ts":"2024-08-19T19:43:01.373472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T19:43:01.373497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 732232f81d76e930 elected leader 732232f81d76e930 at term 3"}
	{"level":"info","ts":"2024-08-19T19:43:01.381299Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"732232f81d76e930","local-member-attributes":"{Name:multinode-548379 ClientURLs:[https://192.168.39.35:2379]}","request-path":"/0/members/732232f81d76e930/attributes","cluster-id":"45f5838de4bd43f1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T19:43:01.381465Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:43:01.381763Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:43:01.382421Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:43:01.383208Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.35:2379"}
	{"level":"info","ts":"2024-08-19T19:43:01.383688Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:43:01.384437Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T19:43:01.384507Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T19:43:01.384532Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [97b3ea7b8c2ce2841b0ee7f45eca09f3d59b14fbb72dd20976dd9581f6ecc942] <==
	{"level":"info","ts":"2024-08-19T19:37:18.751923Z","caller":"traceutil/trace.go:171","msg":"trace[1691852702] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"233.544364ms","start":"2024-08-19T19:37:18.518365Z","end":"2024-08-19T19:37:18.751909Z","steps":["trace[1691852702] 'process raft request'  (duration: 116.141684ms)","trace[1691852702] 'compare'  (duration: 116.660454ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T19:38:11.834338Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.118202ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16803090103365747120 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-548379-m03.17ed38714b386237\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-548379-m03.17ed38714b386237\" value_size:642 lease:7579718066510970918 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-19T19:38:11.834710Z","caller":"traceutil/trace.go:171","msg":"trace[1198149453] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"226.831835ms","start":"2024-08-19T19:38:11.607848Z","end":"2024-08-19T19:38:11.834679Z","steps":["trace[1198149453] 'process raft request'  (duration: 126.3313ms)","trace[1198149453] 'compare'  (duration: 100.0226ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T19:38:18.899806Z","caller":"traceutil/trace.go:171","msg":"trace[1678700882] transaction","detail":"{read_only:false; response_revision:650; number_of_response:1; }","duration":"213.310315ms","start":"2024-08-19T19:38:18.686483Z","end":"2024-08-19T19:38:18.899793Z","steps":["trace[1678700882] 'process raft request'  (duration: 213.219935ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T19:38:21.798632Z","caller":"traceutil/trace.go:171","msg":"trace[134708294] linearizableReadLoop","detail":"{readStateIndex:693; appliedIndex:692; }","duration":"144.39406ms","start":"2024-08-19T19:38:21.654209Z","end":"2024-08-19T19:38:21.798603Z","steps":["trace[134708294] 'read index received'  (duration: 144.256156ms)","trace[134708294] 'applied index is now lower than readState.Index'  (duration: 137.383µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T19:38:21.798782Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.550832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-548379-m03\" ","response":"range_response_count:1 size:2887"}
	{"level":"info","ts":"2024-08-19T19:38:21.798859Z","caller":"traceutil/trace.go:171","msg":"trace[402560877] range","detail":"{range_begin:/registry/minions/multinode-548379-m03; range_end:; response_count:1; response_revision:659; }","duration":"144.638202ms","start":"2024-08-19T19:38:21.654205Z","end":"2024-08-19T19:38:21.798843Z","steps":["trace[402560877] 'agreement among raft nodes before linearized reading'  (duration: 144.488224ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T19:38:21.799039Z","caller":"traceutil/trace.go:171","msg":"trace[2090916661] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"150.75469ms","start":"2024-08-19T19:38:21.648271Z","end":"2024-08-19T19:38:21.799026Z","steps":["trace[2090916661] 'process raft request'  (duration: 150.212825ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:38:22.288052Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.31546ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16803090103365747261 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-548379-m03\" mod_revision:637 > success:<request_put:<key:\"/registry/minions/multinode-548379-m03\" value_size:3127 >> failure:<request_range:<key:\"/registry/minions/multinode-548379-m03\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-19T19:38:22.288208Z","caller":"traceutil/trace.go:171","msg":"trace[16616940] linearizableReadLoop","detail":"{readStateIndex:695; appliedIndex:694; }","duration":"194.870417ms","start":"2024-08-19T19:38:22.093326Z","end":"2024-08-19T19:38:22.288196Z","steps":["trace[16616940] 'read index received'  (duration: 60.029687ms)","trace[16616940] 'applied index is now lower than readState.Index'  (duration: 134.839368ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T19:38:22.288284Z","caller":"traceutil/trace.go:171","msg":"trace[882393299] transaction","detail":"{read_only:false; response_revision:661; number_of_response:1; }","duration":"297.358467ms","start":"2024-08-19T19:38:21.990917Z","end":"2024-08-19T19:38:22.288276Z","steps":["trace[882393299] 'process raft request'  (duration: 162.524211ms)","trace[882393299] 'compare'  (duration: 134.222847ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T19:38:22.288509Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.170545ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-19T19:38:22.288550Z","caller":"traceutil/trace.go:171","msg":"trace[1894142840] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:661; }","duration":"195.223443ms","start":"2024-08-19T19:38:22.093320Z","end":"2024-08-19T19:38:22.288544Z","steps":["trace[1894142840] 'agreement among raft nodes before linearized reading'  (duration: 195.113908ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:38:22.288664Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.403836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-548379-m03\" ","response":"range_response_count:1 size:3188"}
	{"level":"info","ts":"2024-08-19T19:38:22.288694Z","caller":"traceutil/trace.go:171","msg":"trace[104543842] range","detail":"{range_begin:/registry/minions/multinode-548379-m03; range_end:; response_count:1; response_revision:661; }","duration":"134.435073ms","start":"2024-08-19T19:38:22.154254Z","end":"2024-08-19T19:38:22.288690Z","steps":["trace[104543842] 'agreement among raft nodes before linearized reading'  (duration: 134.387233ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T19:41:23.592020Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T19:41:23.592083Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-548379","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.35:2380"],"advertise-client-urls":["https://192.168.39.35:2379"]}
	{"level":"warn","ts":"2024-08-19T19:41:23.592197Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T19:41:23.592285Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T19:41:23.633312Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.35:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T19:41:23.633426Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.35:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T19:41:23.633598Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"732232f81d76e930","current-leader-member-id":"732232f81d76e930"}
	{"level":"info","ts":"2024-08-19T19:41:23.636455Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-08-19T19:41:23.636624Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-08-19T19:41:23.636677Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-548379","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.35:2380"],"advertise-client-urls":["https://192.168.39.35:2379"]}
	
	
	==> kernel <==
	 19:47:06 up 11 min,  0 users,  load average: 0.03, 0.11, 0.08
	Linux multinode-548379 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3947c41c8021a850328a6ef431f4aa7c273b46da629f27172dd9594e95a309e8] <==
	I0819 19:40:40.350168       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.3.0/24] 
	I0819 19:40:50.350444       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:40:50.350570       1 main.go:299] handling current node
	I0819 19:40:50.350612       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:40:50.350638       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:40:50.350863       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0819 19:40:50.350904       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.3.0/24] 
	I0819 19:41:00.353807       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:41:00.353841       1 main.go:299] handling current node
	I0819 19:41:00.353858       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:41:00.353862       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:41:00.353983       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0819 19:41:00.354006       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.3.0/24] 
	I0819 19:41:10.358926       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:41:10.359027       1 main.go:299] handling current node
	I0819 19:41:10.359056       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:41:10.359074       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:41:10.359262       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0819 19:41:10.359290       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.3.0/24] 
	I0819 19:41:20.357485       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:41:20.357582       1 main.go:299] handling current node
	I0819 19:41:20.357612       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:41:20.357629       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:41:20.357772       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0819 19:41:20.357807       1 main.go:322] Node multinode-548379-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e5526087b87c13c5800d983aaa9ee7dba1126cc9d45ae7a4da130df1907428b5] <==
	I0819 19:46:05.848729       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:46:15.852920       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:46:15.853029       1 main.go:299] handling current node
	I0819 19:46:15.853058       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:46:15.853077       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:46:25.856755       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:46:25.856810       1 main.go:299] handling current node
	I0819 19:46:25.856829       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:46:25.856837       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:46:35.851916       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:46:35.852026       1 main.go:299] handling current node
	I0819 19:46:35.852056       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:46:35.852075       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:46:45.852640       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:46:45.852777       1 main.go:299] handling current node
	I0819 19:46:45.852811       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:46:45.852830       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:46:55.857842       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:46:55.857877       1 main.go:299] handling current node
	I0819 19:46:55.857894       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:46:55.857898       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	I0819 19:47:05.849933       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0819 19:47:05.849991       1 main.go:299] handling current node
	I0819 19:47:05.850006       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0819 19:47:05.850012       1 main.go:322] Node multinode-548379-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0c3794f7593119cdb9a488c4223a840b2b1089549a68b2866400eac7c25f9ec4] <==
	W0819 19:41:23.610683       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610716       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610747       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610801       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610826       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610860       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610892       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610925       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.610951       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.611009       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.611037       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.611063       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.616604       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617639       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617776       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617841       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617883       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617920       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617936       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617954       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.617990       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.618030       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.618067       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.618103       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:41:23.621308       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [15c9bcc4dcef7cee9bcb1ab6f982a11667ff9fb0c9c2cd7290e1c8fa52eac90e] <==
	I0819 19:43:02.738384       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 19:43:02.738697       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 19:43:02.740618       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 19:43:02.741011       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 19:43:02.741054       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 19:43:02.755663       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 19:43:02.761380       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 19:43:02.761457       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 19:43:02.762349       1 aggregator.go:171] initial CRD sync complete...
	I0819 19:43:02.762379       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 19:43:02.762386       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 19:43:02.762391       1 cache.go:39] Caches are synced for autoregister controller
	I0819 19:43:02.762542       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 19:43:02.765408       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 19:43:02.765448       1 policy_source.go:224] refreshing policies
	I0819 19:43:02.771648       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0819 19:43:02.803783       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0819 19:43:03.640262       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 19:43:04.355761       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 19:43:04.675891       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 19:43:04.700891       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 19:43:04.893200       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 19:43:04.935784       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 19:43:06.411476       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 19:43:06.460320       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [24eff3c1f0c13300bdb7581b9cafdc8c7c9bea7e77c52fc6491f4e3055c7eea1] <==
	I0819 19:38:59.997414       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-548379-m02"
	I0819 19:39:00.023324       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-548379-m03" podCIDRs=["10.244.3.0/24"]
	I0819 19:39:00.023459       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	E0819 19:39:00.038316       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-548379-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-548379-m03" podCIDRs=["10.244.4.0/24"]
	E0819 19:39:00.038515       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-548379-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-548379-m03"
	E0819 19:39:00.038876       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-548379-m03': failed to patch node CIDR: Node \"multinode-548379-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0819 19:39:00.039046       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:00.045005       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:00.236992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:00.581254       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:03.263683       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:10.075580       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:18.019632       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-548379-m02"
	I0819 19:39:18.019702       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:18.027950       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:39:18.170037       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:40:03.187918       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:40:03.188253       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-548379-m02"
	I0819 19:40:03.194472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m02"
	I0819 19:40:03.217088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:40:03.226030       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m02"
	I0819 19:40:03.260699       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.128656ms"
	I0819 19:40:03.261082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="87.887µs"
	I0819 19:40:08.371745       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:40:18.468675       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m02"
	
	
	==> kube-controller-manager [d520dc4f2fb7ed6e16b3b726559b7e9fdb6e4422b957094032e868df08609cc5] <==
	E0819 19:44:21.349872       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-548379-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-548379-m03"
	E0819 19:44:21.349915       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-548379-m03': failed to patch node CIDR: Node \"multinode-548379-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0819 19:44:21.349942       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:21.355620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:21.364960       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:21.712964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:26.196770       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:31.532904       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:39.192507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:39.192675       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-548379-m02"
	I0819 19:44:39.204225       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:41.104256       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:43.908331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:43.926563       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:44.390332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m03"
	I0819 19:44:44.390535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-548379-m02"
	I0819 19:45:26.066700       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-tq6d4"
	I0819 19:45:26.097564       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-tq6d4"
	I0819 19:45:26.098181       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-tlpv4"
	I0819 19:45:26.122946       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m02"
	I0819 19:45:26.149570       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m02"
	I0819 19:45:26.154409       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-tlpv4"
	I0819 19:45:26.185024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.539719ms"
	I0819 19:45:26.185225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.136µs"
	I0819 19:45:31.218344       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-548379-m02"
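	Both kube-controller-manager instances hit the same "Failed to update node PodCIDR after multiple attempts" error: the node-ipam-controller tried to patch a fresh range onto multinode-548379-m03, which already carried a CIDR from before its re-registration, and a node may hold only one CIDR per IP family with podCIDR immutable once set, so the allocator releases the new range and requeues. As an illustration only (context name assumed from the profile, and m03 may already have been deleted by capture time), the recorded range for a node can be read with:
	
		kubectl --context multinode-548379 get node multinode-548379-m03 -o jsonpath='{.spec.podCIDRs}'
	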
	
	
	==> kube-proxy [8cde9d50116f223ef6493538f5ae20918c477628d4127a2ac98182f64446395b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:36:36.324419       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:36:36.333056       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.35"]
	E0819 19:36:36.333161       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:36:36.387601       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:36:36.387634       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:36:36.387665       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:36:36.390225       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:36:36.390524       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:36:36.390535       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:36:36.391730       1 config.go:197] "Starting service config controller"
	I0819 19:36:36.391750       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:36:36.391781       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:36:36.391786       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:36:36.392478       1 config.go:326] "Starting node config controller"
	I0819 19:36:36.392526       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:36:36.492837       1 shared_informer.go:320] Caches are synced for node config
	I0819 19:36:36.492870       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:36:36.492891       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ae957bd572857eca22ae7721f8b447ffa030532411fe7768a67d7e9fba77d34f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:43:05.282342       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:43:05.299548       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.35"]
	E0819 19:43:05.299860       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:43:05.374277       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:43:05.375200       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:43:05.375296       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:43:05.379518       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:43:05.379780       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:43:05.379966       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:43:05.381719       1 config.go:197] "Starting service config controller"
	I0819 19:43:05.383880       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:43:05.382902       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:43:05.385238       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:43:05.383415       1 config.go:326] "Starting node config controller"
	I0819 19:43:05.385275       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:43:05.486100       1 shared_informer.go:320] Caches are synced for node config
	I0819 19:43:05.486211       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:43:05.486222       1 shared_informer.go:320] Caches are synced for endpoint slice config
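	Both kube-proxy instances log "Error cleaning up nftables rules ... Operation not supported" and then continue normally: the nftables cleanup probe is simply not supported by this Buildroot kernel, IPv6 iptables support is absent, and the proxy falls back to the IPv4 iptables Proxier. A quick way to confirm the iptables proxier actually programmed its rules, sketched against the same profile (KUBE-SERVICES is the standard kube-proxy chain name, assumed here rather than taken from these logs):
	
		out/minikube-linux-amd64 -p multinode-548379 ssh "sudo iptables -t nat -S KUBE-SERVICES | head -n 5"
	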
	
	
	==> kube-scheduler [9d9ef8b8013cdc17033d16419fd14ec8e27d80e9deedb824c413d93e9da4fb06] <==
	I0819 19:43:01.331795       1 serving.go:386] Generated self-signed cert in-memory
	W0819 19:43:02.679636       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 19:43:02.679675       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 19:43:02.679687       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 19:43:02.679693       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 19:43:02.763039       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 19:43:02.763075       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:43:02.768460       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 19:43:02.768522       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 19:43:02.769036       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 19:43:02.769096       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 19:43:02.868955       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e83dd57bfe6d4f267dc1e80968e066a1107866708e1ed336dd94dde65dc97994] <==
	E0819 19:36:27.224902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.301041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 19:36:27.301099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.357308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 19:36:27.357357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.406339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 19:36:27.406386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.441394       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 19:36:27.441441       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.485460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 19:36:27.485765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.488473       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 19:36:27.489155       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.551043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 19:36:27.551246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.612400       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 19:36:27.612676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.712588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 19:36:27.712743       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.722878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 19:36:27.722997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:36:27.877783       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 19:36:27.877917       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 19:36:29.487473       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 19:41:23.592549       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 19:45:49 multinode-548379 kubelet[2942]: E0819 19:45:49.416181    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096749415773113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:45:59 multinode-548379 kubelet[2942]: E0819 19:45:59.397576    2942 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:45:59 multinode-548379 kubelet[2942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:45:59 multinode-548379 kubelet[2942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:45:59 multinode-548379 kubelet[2942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:45:59 multinode-548379 kubelet[2942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:45:59 multinode-548379 kubelet[2942]: E0819 19:45:59.417767    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096759417491059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:45:59 multinode-548379 kubelet[2942]: E0819 19:45:59.417793    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096759417491059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:46:09 multinode-548379 kubelet[2942]: E0819 19:46:09.420960    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096769420660276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:46:09 multinode-548379 kubelet[2942]: E0819 19:46:09.421000    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096769420660276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:46:19 multinode-548379 kubelet[2942]: E0819 19:46:19.424455    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096779423184776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:46:19 multinode-548379 kubelet[2942]: E0819 19:46:19.424573    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096779423184776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:46:29 multinode-548379 kubelet[2942]: E0819 19:46:29.426823    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096789426626719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:46:29 multinode-548379 kubelet[2942]: E0819 19:46:29.426849    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096789426626719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:46:39 multinode-548379 kubelet[2942]: E0819 19:46:39.428927    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096799428481490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:46:39 multinode-548379 kubelet[2942]: E0819 19:46:39.428972    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096799428481490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:46:49 multinode-548379 kubelet[2942]: E0819 19:46:49.430967    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096809430183451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:46:49 multinode-548379 kubelet[2942]: E0819 19:46:49.431006    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096809430183451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:46:59 multinode-548379 kubelet[2942]: E0819 19:46:59.397684    2942 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:46:59 multinode-548379 kubelet[2942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:46:59 multinode-548379 kubelet[2942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:46:59 multinode-548379 kubelet[2942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:46:59 multinode-548379 kubelet[2942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:46:59 multinode-548379 kubelet[2942]: E0819 19:46:59.432356    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096819431895226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:46:59 multinode-548379 kubelet[2942]: E0819 19:46:59.432379    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096819431895226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:47:05.382916  472912 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-430949/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
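The stderr above shows minikube's log collector hitting bufio.Scanner's token limit (64 KiB by default) while reading lastStart.txt. A rough way to confirm an over-long line, assuming shell access to the workspace path quoted in that error, is a sketch like:

    # print the longest line length in lastStart.txt; anything above 65536 bytes
    # would trip a default-sized bufio.Scanner (path taken from the error above)
    awk '{ if (length > max) max = length } END { print max }' \
      /home/jenkins/minikube-integration/19423-430949/.minikube/logs/lastStart.txt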
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-548379 -n multinode-548379
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-548379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.38s)
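The kubelet entries above repeatedly fail the KUBE-KUBELET-CANARY setup ("ip6tables ... Table does not exist (do you need to insmod?)"), and kube-proxy earlier could not clean up its ip6 nftables table, which suggests the minikube guest kernel lacks IPv6 netfilter support. A rough probe, assuming the multinode-548379 profile were still running (the audit table further down shows it has since been deleted, so this is illustrative only), might look like:

    # list any loaded IPv6 netfilter modules inside the guest
    out/minikube-linux-amd64 -p multinode-548379 ssh "lsmod | grep -E 'ip6table|ip6_tables' || echo 'no ip6tables modules loaded'"
    # expected to fail the same way the kubelet canary does if the nat table is unavailable
    out/minikube-linux-amd64 -p multinode-548379 ssh "sudo ip6tables -t nat -L -n"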

                                                
                                    
x
+
TestPreload (186s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-247827 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-247827 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m31.137767018s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-247827 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-247827 image pull gcr.io/k8s-minikube/busybox: (1.715985519s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-247827
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-247827: (6.619025537s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-247827 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-247827 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.485645399s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-247827 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
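As a hands-on re-check of the expectation at preload_test.go:76, one could rerun the same image list command the test uses and filter for busybox; this is only a sketch against the same test-preload-247827 profile, and grep exits non-zero when the image is absent, matching the failure above:

    # reproduce the check: busybox should appear after the earlier "image pull" step survived the restart
    out/minikube-linux-amd64 -p test-preload-247827 image list | grep k8s-minikube/busybox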
panic.go:626: *** TestPreload FAILED at 2024-08-19 19:53:57.121757782 +0000 UTC m=+4655.319035633
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-247827 -n test-preload-247827
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-247827 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-247827 logs -n 25: (1.204986957s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n multinode-548379 sudo cat                                       | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | /home/docker/cp-test_multinode-548379-m03_multinode-548379.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-548379 cp multinode-548379-m03:/home/docker/cp-test.txt                       | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m02:/home/docker/cp-test_multinode-548379-m03_multinode-548379-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n                                                                 | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | multinode-548379-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-548379 ssh -n multinode-548379-m02 sudo cat                                   | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	|         | /home/docker/cp-test_multinode-548379-m03_multinode-548379-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-548379 node stop m03                                                          | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:38 UTC |
	| node    | multinode-548379 node start                                                             | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:38 UTC | 19 Aug 24 19:39 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-548379                                                                | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:39 UTC |                     |
	| stop    | -p multinode-548379                                                                     | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:39 UTC |                     |
	| start   | -p multinode-548379                                                                     | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:41 UTC | 19 Aug 24 19:44 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-548379                                                                | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:44 UTC |                     |
	| node    | multinode-548379 node delete                                                            | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:44 UTC | 19 Aug 24 19:44 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-548379 stop                                                                   | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:44 UTC |                     |
	| start   | -p multinode-548379                                                                     | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:47 UTC | 19 Aug 24 19:50 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-548379                                                                | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:50 UTC |                     |
	| start   | -p multinode-548379-m02                                                                 | multinode-548379-m02 | jenkins | v1.33.1 | 19 Aug 24 19:50 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-548379-m03                                                                 | multinode-548379-m03 | jenkins | v1.33.1 | 19 Aug 24 19:50 UTC | 19 Aug 24 19:50 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-548379                                                                 | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:50 UTC |                     |
	| delete  | -p multinode-548379-m03                                                                 | multinode-548379-m03 | jenkins | v1.33.1 | 19 Aug 24 19:50 UTC | 19 Aug 24 19:50 UTC |
	| delete  | -p multinode-548379                                                                     | multinode-548379     | jenkins | v1.33.1 | 19 Aug 24 19:50 UTC | 19 Aug 24 19:50 UTC |
	| start   | -p test-preload-247827                                                                  | test-preload-247827  | jenkins | v1.33.1 | 19 Aug 24 19:50 UTC | 19 Aug 24 19:52 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-247827 image pull                                                          | test-preload-247827  | jenkins | v1.33.1 | 19 Aug 24 19:52 UTC | 19 Aug 24 19:52 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-247827                                                                  | test-preload-247827  | jenkins | v1.33.1 | 19 Aug 24 19:52 UTC | 19 Aug 24 19:52 UTC |
	| start   | -p test-preload-247827                                                                  | test-preload-247827  | jenkins | v1.33.1 | 19 Aug 24 19:52 UTC | 19 Aug 24 19:53 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-247827 image list                                                          | test-preload-247827  | jenkins | v1.33.1 | 19 Aug 24 19:53 UTC | 19 Aug 24 19:53 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:52:33
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:52:33.457946  475310 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:52:33.458240  475310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:52:33.458250  475310 out.go:358] Setting ErrFile to fd 2...
	I0819 19:52:33.458256  475310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:52:33.458476  475310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:52:33.459045  475310 out.go:352] Setting JSON to false
	I0819 19:52:33.460018  475310 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12904,"bootTime":1724084249,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:52:33.460082  475310 start.go:139] virtualization: kvm guest
	I0819 19:52:33.462142  475310 out.go:177] * [test-preload-247827] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:52:33.463527  475310 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 19:52:33.463532  475310 notify.go:220] Checking for updates...
	I0819 19:52:33.465998  475310 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:52:33.467149  475310 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:52:33.468200  475310 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:52:33.469274  475310 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:52:33.470419  475310 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:52:33.471906  475310 config.go:182] Loaded profile config "test-preload-247827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0819 19:52:33.472304  475310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:52:33.472374  475310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:52:33.488072  475310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43675
	I0819 19:52:33.488537  475310 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:52:33.489207  475310 main.go:141] libmachine: Using API Version  1
	I0819 19:52:33.489233  475310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:52:33.489622  475310 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:52:33.489896  475310 main.go:141] libmachine: (test-preload-247827) Calling .DriverName
	I0819 19:52:33.491559  475310 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 19:52:33.492600  475310 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 19:52:33.492939  475310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:52:33.492983  475310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:52:33.508199  475310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0819 19:52:33.508661  475310 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:52:33.509252  475310 main.go:141] libmachine: Using API Version  1
	I0819 19:52:33.509289  475310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:52:33.509661  475310 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:52:33.509863  475310 main.go:141] libmachine: (test-preload-247827) Calling .DriverName
	I0819 19:52:33.547120  475310 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:52:33.548287  475310 start.go:297] selected driver: kvm2
	I0819 19:52:33.548308  475310 start.go:901] validating driver "kvm2" against &{Name:test-preload-247827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-247827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L M
ountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:52:33.548458  475310 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:52:33.549204  475310 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:52:33.549321  475310 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:52:33.565341  475310 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:52:33.565779  475310 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:52:33.565843  475310 cni.go:84] Creating CNI manager for ""
	I0819 19:52:33.565864  475310 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:52:33.565940  475310 start.go:340] cluster config:
	{Name:test-preload-247827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-247827 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:52:33.566068  475310 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:52:33.567852  475310 out.go:177] * Starting "test-preload-247827" primary control-plane node in "test-preload-247827" cluster
	I0819 19:52:33.568932  475310 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0819 19:52:33.594644  475310 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0819 19:52:33.594687  475310 cache.go:56] Caching tarball of preloaded images
	I0819 19:52:33.594851  475310 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0819 19:52:33.596492  475310 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0819 19:52:33.597843  475310 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0819 19:52:33.620953  475310 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0819 19:52:36.970226  475310 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0819 19:52:36.970364  475310 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0819 19:52:37.838689  475310 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0819 19:52:37.838838  475310 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/test-preload-247827/config.json ...
	I0819 19:52:37.839097  475310 start.go:360] acquireMachinesLock for test-preload-247827: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:52:37.839182  475310 start.go:364] duration metric: took 57.69µs to acquireMachinesLock for "test-preload-247827"
	I0819 19:52:37.839203  475310 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:52:37.839211  475310 fix.go:54] fixHost starting: 
	I0819 19:52:37.839546  475310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:52:37.839582  475310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:52:37.854394  475310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0819 19:52:37.854844  475310 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:52:37.855404  475310 main.go:141] libmachine: Using API Version  1
	I0819 19:52:37.855431  475310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:52:37.855834  475310 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:52:37.856040  475310 main.go:141] libmachine: (test-preload-247827) Calling .DriverName
	I0819 19:52:37.856233  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetState
	I0819 19:52:37.857946  475310 fix.go:112] recreateIfNeeded on test-preload-247827: state=Stopped err=<nil>
	I0819 19:52:37.857973  475310 main.go:141] libmachine: (test-preload-247827) Calling .DriverName
	W0819 19:52:37.858140  475310 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:52:37.860057  475310 out.go:177] * Restarting existing kvm2 VM for "test-preload-247827" ...
	I0819 19:52:37.861269  475310 main.go:141] libmachine: (test-preload-247827) Calling .Start
	I0819 19:52:37.861557  475310 main.go:141] libmachine: (test-preload-247827) Ensuring networks are active...
	I0819 19:52:37.862408  475310 main.go:141] libmachine: (test-preload-247827) Ensuring network default is active
	I0819 19:52:37.862767  475310 main.go:141] libmachine: (test-preload-247827) Ensuring network mk-test-preload-247827 is active
	I0819 19:52:37.863124  475310 main.go:141] libmachine: (test-preload-247827) Getting domain xml...
	I0819 19:52:37.863919  475310 main.go:141] libmachine: (test-preload-247827) Creating domain...
	I0819 19:52:39.081298  475310 main.go:141] libmachine: (test-preload-247827) Waiting to get IP...
	I0819 19:52:39.082043  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:39.082390  475310 main.go:141] libmachine: (test-preload-247827) DBG | unable to find current IP address of domain test-preload-247827 in network mk-test-preload-247827
	I0819 19:52:39.082461  475310 main.go:141] libmachine: (test-preload-247827) DBG | I0819 19:52:39.082376  475362 retry.go:31] will retry after 219.342504ms: waiting for machine to come up
	I0819 19:52:39.303969  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:39.304544  475310 main.go:141] libmachine: (test-preload-247827) DBG | unable to find current IP address of domain test-preload-247827 in network mk-test-preload-247827
	I0819 19:52:39.304576  475310 main.go:141] libmachine: (test-preload-247827) DBG | I0819 19:52:39.304454  475362 retry.go:31] will retry after 375.496521ms: waiting for machine to come up
	I0819 19:52:39.681123  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:39.681579  475310 main.go:141] libmachine: (test-preload-247827) DBG | unable to find current IP address of domain test-preload-247827 in network mk-test-preload-247827
	I0819 19:52:39.681603  475310 main.go:141] libmachine: (test-preload-247827) DBG | I0819 19:52:39.681552  475362 retry.go:31] will retry after 478.424924ms: waiting for machine to come up
	I0819 19:52:40.161251  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:40.161690  475310 main.go:141] libmachine: (test-preload-247827) DBG | unable to find current IP address of domain test-preload-247827 in network mk-test-preload-247827
	I0819 19:52:40.161722  475310 main.go:141] libmachine: (test-preload-247827) DBG | I0819 19:52:40.161638  475362 retry.go:31] will retry after 533.361356ms: waiting for machine to come up
	I0819 19:52:40.696413  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:40.696775  475310 main.go:141] libmachine: (test-preload-247827) DBG | unable to find current IP address of domain test-preload-247827 in network mk-test-preload-247827
	I0819 19:52:40.696794  475310 main.go:141] libmachine: (test-preload-247827) DBG | I0819 19:52:40.696741  475362 retry.go:31] will retry after 577.397551ms: waiting for machine to come up
	I0819 19:52:41.275489  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:41.275946  475310 main.go:141] libmachine: (test-preload-247827) DBG | unable to find current IP address of domain test-preload-247827 in network mk-test-preload-247827
	I0819 19:52:41.275978  475310 main.go:141] libmachine: (test-preload-247827) DBG | I0819 19:52:41.275894  475362 retry.go:31] will retry after 949.461241ms: waiting for machine to come up
	I0819 19:52:42.226704  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:42.227129  475310 main.go:141] libmachine: (test-preload-247827) DBG | unable to find current IP address of domain test-preload-247827 in network mk-test-preload-247827
	I0819 19:52:42.227163  475310 main.go:141] libmachine: (test-preload-247827) DBG | I0819 19:52:42.227112  475362 retry.go:31] will retry after 1.073637877s: waiting for machine to come up
	I0819 19:52:43.303154  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:43.303503  475310 main.go:141] libmachine: (test-preload-247827) DBG | unable to find current IP address of domain test-preload-247827 in network mk-test-preload-247827
	I0819 19:52:43.303528  475310 main.go:141] libmachine: (test-preload-247827) DBG | I0819 19:52:43.303467  475362 retry.go:31] will retry after 1.390277068s: waiting for machine to come up
	I0819 19:52:44.696039  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:44.696578  475310 main.go:141] libmachine: (test-preload-247827) DBG | unable to find current IP address of domain test-preload-247827 in network mk-test-preload-247827
	I0819 19:52:44.696601  475310 main.go:141] libmachine: (test-preload-247827) DBG | I0819 19:52:44.696514  475362 retry.go:31] will retry after 1.56922479s: waiting for machine to come up
	I0819 19:52:46.268163  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:46.268644  475310 main.go:141] libmachine: (test-preload-247827) DBG | unable to find current IP address of domain test-preload-247827 in network mk-test-preload-247827
	I0819 19:52:46.268678  475310 main.go:141] libmachine: (test-preload-247827) DBG | I0819 19:52:46.268583  475362 retry.go:31] will retry after 1.595433555s: waiting for machine to come up
	I0819 19:52:47.866006  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:47.866374  475310 main.go:141] libmachine: (test-preload-247827) DBG | unable to find current IP address of domain test-preload-247827 in network mk-test-preload-247827
	I0819 19:52:47.866403  475310 main.go:141] libmachine: (test-preload-247827) DBG | I0819 19:52:47.866314  475362 retry.go:31] will retry after 2.358035188s: waiting for machine to come up
	I0819 19:52:50.226967  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:50.227413  475310 main.go:141] libmachine: (test-preload-247827) DBG | unable to find current IP address of domain test-preload-247827 in network mk-test-preload-247827
	I0819 19:52:50.227437  475310 main.go:141] libmachine: (test-preload-247827) DBG | I0819 19:52:50.227371  475362 retry.go:31] will retry after 3.176495475s: waiting for machine to come up
	I0819 19:52:53.405110  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:53.405605  475310 main.go:141] libmachine: (test-preload-247827) DBG | unable to find current IP address of domain test-preload-247827 in network mk-test-preload-247827
	I0819 19:52:53.405643  475310 main.go:141] libmachine: (test-preload-247827) DBG | I0819 19:52:53.405568  475362 retry.go:31] will retry after 3.263434614s: waiting for machine to come up
	I0819 19:52:56.673049  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:56.673515  475310 main.go:141] libmachine: (test-preload-247827) Found IP for machine: 192.168.39.61
	I0819 19:52:56.673535  475310 main.go:141] libmachine: (test-preload-247827) Reserving static IP address...
	I0819 19:52:56.673553  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has current primary IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:56.674007  475310 main.go:141] libmachine: (test-preload-247827) Reserved static IP address: 192.168.39.61
	I0819 19:52:56.674042  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "test-preload-247827", mac: "52:54:00:05:9d:b9", ip: "192.168.39.61"} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:56.674054  475310 main.go:141] libmachine: (test-preload-247827) Waiting for SSH to be available...
	I0819 19:52:56.674081  475310 main.go:141] libmachine: (test-preload-247827) DBG | skip adding static IP to network mk-test-preload-247827 - found existing host DHCP lease matching {name: "test-preload-247827", mac: "52:54:00:05:9d:b9", ip: "192.168.39.61"}
	I0819 19:52:56.674095  475310 main.go:141] libmachine: (test-preload-247827) DBG | Getting to WaitForSSH function...
	I0819 19:52:56.675905  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:56.676296  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:56.676327  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:56.676412  475310 main.go:141] libmachine: (test-preload-247827) DBG | Using SSH client type: external
	I0819 19:52:56.676439  475310 main.go:141] libmachine: (test-preload-247827) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/test-preload-247827/id_rsa (-rw-------)
	I0819 19:52:56.676473  475310 main.go:141] libmachine: (test-preload-247827) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/test-preload-247827/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:52:56.676487  475310 main.go:141] libmachine: (test-preload-247827) DBG | About to run SSH command:
	I0819 19:52:56.676500  475310 main.go:141] libmachine: (test-preload-247827) DBG | exit 0
	I0819 19:52:56.801044  475310 main.go:141] libmachine: (test-preload-247827) DBG | SSH cmd err, output: <nil>: 
	I0819 19:52:56.801422  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetConfigRaw
	I0819 19:52:56.802110  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetIP
	I0819 19:52:56.804542  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:56.804921  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:56.804955  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:56.805220  475310 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/test-preload-247827/config.json ...
	I0819 19:52:56.805497  475310 machine.go:93] provisionDockerMachine start ...
	I0819 19:52:56.805525  475310 main.go:141] libmachine: (test-preload-247827) Calling .DriverName
	I0819 19:52:56.805780  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHHostname
	I0819 19:52:56.808340  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:56.808769  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:56.808805  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:56.809012  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHPort
	I0819 19:52:56.809273  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:52:56.809459  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:52:56.809629  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHUsername
	I0819 19:52:56.809806  475310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:52:56.810032  475310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0819 19:52:56.810047  475310 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:52:56.913311  475310 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:52:56.913343  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetMachineName
	I0819 19:52:56.913597  475310 buildroot.go:166] provisioning hostname "test-preload-247827"
	I0819 19:52:56.913632  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetMachineName
	I0819 19:52:56.913851  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHHostname
	I0819 19:52:56.916376  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:56.916737  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:56.916769  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:56.916932  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHPort
	I0819 19:52:56.917127  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:52:56.917312  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:52:56.917480  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHUsername
	I0819 19:52:56.917673  475310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:52:56.917856  475310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0819 19:52:56.917871  475310 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-247827 && echo "test-preload-247827" | sudo tee /etc/hostname
	I0819 19:52:57.035093  475310 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-247827
	
	I0819 19:52:57.035126  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHHostname
	I0819 19:52:57.037914  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.038221  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:57.038248  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.038416  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHPort
	I0819 19:52:57.038634  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:52:57.038823  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:52:57.038991  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHUsername
	I0819 19:52:57.039170  475310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:52:57.039351  475310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0819 19:52:57.039367  475310 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-247827' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-247827/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-247827' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:52:57.149931  475310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:52:57.149964  475310 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:52:57.149996  475310 buildroot.go:174] setting up certificates
	I0819 19:52:57.150009  475310 provision.go:84] configureAuth start
	I0819 19:52:57.150021  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetMachineName
	I0819 19:52:57.150348  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetIP
	I0819 19:52:57.153292  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.153542  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:57.153571  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.153794  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHHostname
	I0819 19:52:57.155898  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.156223  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:57.156253  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.156390  475310 provision.go:143] copyHostCerts
	I0819 19:52:57.156453  475310 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:52:57.156476  475310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:52:57.156574  475310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:52:57.156744  475310 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:52:57.156767  475310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:52:57.156809  475310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:52:57.156884  475310 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:52:57.156891  475310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:52:57.156914  475310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:52:57.156970  475310 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.test-preload-247827 san=[127.0.0.1 192.168.39.61 localhost minikube test-preload-247827]
	I0819 19:52:57.255168  475310 provision.go:177] copyRemoteCerts
	I0819 19:52:57.255231  475310 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:52:57.255265  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHHostname
	I0819 19:52:57.257830  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.258116  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:57.258156  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.258327  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHPort
	I0819 19:52:57.258540  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:52:57.258714  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHUsername
	I0819 19:52:57.258866  475310 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/test-preload-247827/id_rsa Username:docker}
	I0819 19:52:57.340985  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:52:57.365009  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:52:57.388968  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 19:52:57.412957  475310 provision.go:87] duration metric: took 262.930227ms to configureAuth
	I0819 19:52:57.412994  475310 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:52:57.413215  475310 config.go:182] Loaded profile config "test-preload-247827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0819 19:52:57.413327  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHHostname
	I0819 19:52:57.416182  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.416602  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:57.416649  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.416840  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHPort
	I0819 19:52:57.417042  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:52:57.417212  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:52:57.417353  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHUsername
	I0819 19:52:57.417565  475310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:52:57.417785  475310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0819 19:52:57.417812  475310 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:52:57.683030  475310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:52:57.683063  475310 machine.go:96] duration metric: took 877.54757ms to provisionDockerMachine
	I0819 19:52:57.683079  475310 start.go:293] postStartSetup for "test-preload-247827" (driver="kvm2")
	I0819 19:52:57.683095  475310 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:52:57.683115  475310 main.go:141] libmachine: (test-preload-247827) Calling .DriverName
	I0819 19:52:57.683470  475310 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:52:57.683497  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHHostname
	I0819 19:52:57.686128  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.686646  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:57.686676  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.686862  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHPort
	I0819 19:52:57.687089  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:52:57.687286  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHUsername
	I0819 19:52:57.687450  475310 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/test-preload-247827/id_rsa Username:docker}
	I0819 19:52:57.768012  475310 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:52:57.772102  475310 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:52:57.772131  475310 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:52:57.772202  475310 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:52:57.772278  475310 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:52:57.772366  475310 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:52:57.781634  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:52:57.805230  475310 start.go:296] duration metric: took 122.13546ms for postStartSetup
	I0819 19:52:57.805275  475310 fix.go:56] duration metric: took 19.966065235s for fixHost
	I0819 19:52:57.805298  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHHostname
	I0819 19:52:57.807835  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.808134  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:57.808158  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.808289  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHPort
	I0819 19:52:57.808547  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:52:57.808705  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:52:57.808871  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHUsername
	I0819 19:52:57.809050  475310 main.go:141] libmachine: Using SSH client type: native
	I0819 19:52:57.809265  475310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0819 19:52:57.809276  475310 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:52:57.913982  475310 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724097177.889021508
	
	I0819 19:52:57.914007  475310 fix.go:216] guest clock: 1724097177.889021508
	I0819 19:52:57.914019  475310 fix.go:229] Guest: 2024-08-19 19:52:57.889021508 +0000 UTC Remote: 2024-08-19 19:52:57.805279099 +0000 UTC m=+24.384564117 (delta=83.742409ms)
	I0819 19:52:57.914045  475310 fix.go:200] guest clock delta is within tolerance: 83.742409ms
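The clock comparison here is just the guest's date +%s.%N read over SSH against the host's own clock; a minimal sketch of the same check, reusing the SSH key path from this run (tolerance handling omitted):

KEY=/home/jenkins/minikube-integration/19423-430949/.minikube/machines/test-preload-247827/id_rsa
GUEST=$(ssh -i "$KEY" docker@192.168.39.61 'date +%s.%N')   # guest wall clock
HOST=$(date +%s.%N)                                          # host wall clock
awk -v h="$HOST" -v g="$GUEST" 'BEGIN { printf "delta: %.3fs\n", h - g }'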
	I0819 19:52:57.914052  475310 start.go:83] releasing machines lock for "test-preload-247827", held for 20.074857842s
	I0819 19:52:57.914077  475310 main.go:141] libmachine: (test-preload-247827) Calling .DriverName
	I0819 19:52:57.914361  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetIP
	I0819 19:52:57.917006  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.917339  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:57.917370  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.917515  475310 main.go:141] libmachine: (test-preload-247827) Calling .DriverName
	I0819 19:52:57.918092  475310 main.go:141] libmachine: (test-preload-247827) Calling .DriverName
	I0819 19:52:57.918291  475310 main.go:141] libmachine: (test-preload-247827) Calling .DriverName
	I0819 19:52:57.918369  475310 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:52:57.918419  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHHostname
	I0819 19:52:57.918521  475310 ssh_runner.go:195] Run: cat /version.json
	I0819 19:52:57.918536  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHHostname
	I0819 19:52:57.921345  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.921685  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:57.921718  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.921784  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.921921  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHPort
	I0819 19:52:57.922110  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:52:57.922206  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:57.922231  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:57.922277  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHUsername
	I0819 19:52:57.922383  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHPort
	I0819 19:52:57.922460  475310 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/test-preload-247827/id_rsa Username:docker}
	I0819 19:52:57.922508  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:52:57.922711  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHUsername
	I0819 19:52:57.922869  475310 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/test-preload-247827/id_rsa Username:docker}
	I0819 19:52:58.018538  475310 ssh_runner.go:195] Run: systemctl --version
	I0819 19:52:58.024288  475310 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:52:58.169491  475310 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:52:58.175398  475310 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:52:58.175474  475310 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:52:58.191795  475310 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:52:58.191827  475310 start.go:495] detecting cgroup driver to use...
	I0819 19:52:58.191898  475310 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:52:58.208389  475310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:52:58.222944  475310 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:52:58.223024  475310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:52:58.237149  475310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:52:58.251098  475310 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:52:58.373382  475310 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:52:58.528023  475310 docker.go:233] disabling docker service ...
	I0819 19:52:58.528119  475310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:52:58.542300  475310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:52:58.555186  475310 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:52:58.673371  475310 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:52:58.795050  475310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:52:58.813788  475310 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:52:58.831618  475310 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0819 19:52:58.831701  475310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:52:58.841821  475310 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:52:58.841890  475310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:52:58.852144  475310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:52:58.862306  475310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:52:58.872430  475310 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:52:58.882859  475310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:52:58.893222  475310 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:52:58.910113  475310 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:52:58.920332  475310 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:52:58.929847  475310 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:52:58.929906  475310 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:52:58.941908  475310 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:52:58.951112  475310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:52:59.068526  475310 ssh_runner.go:195] Run: sudo systemctl restart crio
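The CRI-O setup above reduces to a few in-place edits of /etc/crio/crio.conf.d/02-crio.conf plus a restart; a condensed sketch of the same steps (it assumes the file already carries a default_sysctls block, which minikube adds first when it is missing):

# pin the pause image and cgroup driver expected by Kubernetes v1.24.4
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
# allow pods to bind low ports without privileges
sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
# make bridged traffic visible to iptables and enable forwarding, then apply
sudo modprobe br_netfilter
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
sudo systemctl daemon-reload && sudo systemctl restart crio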
	I0819 19:52:59.198693  475310 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:52:59.198776  475310 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:52:59.203480  475310 start.go:563] Will wait 60s for crictl version
	I0819 19:52:59.203558  475310 ssh_runner.go:195] Run: which crictl
	I0819 19:52:59.207448  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:52:59.242828  475310 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:52:59.242915  475310 ssh_runner.go:195] Run: crio --version
	I0819 19:52:59.269957  475310 ssh_runner.go:195] Run: crio --version
	I0819 19:52:59.299271  475310 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0819 19:52:59.300556  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetIP
	I0819 19:52:59.303546  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:59.303901  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:52:59.303931  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:52:59.304154  475310 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:52:59.308646  475310 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:52:59.321287  475310 kubeadm.go:883] updating cluster {Name:test-preload-247827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-247827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:52:59.321418  475310 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0819 19:52:59.321475  475310 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:52:59.360422  475310 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0819 19:52:59.360497  475310 ssh_runner.go:195] Run: which lz4
	I0819 19:52:59.364260  475310 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:52:59.368271  475310 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:52:59.368306  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0819 19:53:00.796916  475310 crio.go:462] duration metric: took 1.432683802s to copy over tarball
	I0819 19:53:00.797005  475310 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:53:03.212607  475310 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.41557072s)
	I0819 19:53:03.212636  475310 crio.go:469] duration metric: took 2.415684843s to extract the tarball
	I0819 19:53:03.212644  475310 ssh_runner.go:146] rm: /preloaded.tar.lz4
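Copying and extracting the preload is equally scriptable by hand; a rough equivalent of the transfer and unpack above, staging through /tmp instead of writing /preloaded.tar.lz4 directly:

KEY=/home/jenkins/minikube-integration/19423-430949/.minikube/machines/test-preload-247827/id_rsa
TARBALL=/home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
# copy the cached tarball to the node, then unpack it into CRI-O's storage under /var
scp -i "$KEY" "$TARBALL" docker@192.168.39.61:/tmp/preloaded.tar.lz4
ssh -i "$KEY" docker@192.168.39.61 \
  'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4'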
	I0819 19:53:03.253570  475310 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:53:03.300982  475310 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0819 19:53:03.301009  475310 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:53:03.301054  475310 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:53:03.301106  475310 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 19:53:03.301126  475310 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 19:53:03.301193  475310 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 19:53:03.301246  475310 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 19:53:03.301245  475310 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:53:03.301286  475310 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 19:53:03.301398  475310 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 19:53:03.302499  475310 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 19:53:03.302514  475310 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 19:53:03.302526  475310 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:53:03.302532  475310 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 19:53:03.302501  475310 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 19:53:03.302570  475310 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 19:53:03.302616  475310 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 19:53:03.302617  475310 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:53:03.459680  475310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 19:53:03.464388  475310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0819 19:53:03.464677  475310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 19:53:03.466750  475310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0819 19:53:03.488475  475310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:53:03.499687  475310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 19:53:03.510714  475310 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0819 19:53:03.510763  475310 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 19:53:03.510817  475310 ssh_runner.go:195] Run: which crictl
	I0819 19:53:03.515919  475310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0819 19:53:03.591011  475310 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0819 19:53:03.591052  475310 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 19:53:03.591103  475310 ssh_runner.go:195] Run: which crictl
	I0819 19:53:03.596959  475310 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0819 19:53:03.597002  475310 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0819 19:53:03.597038  475310 ssh_runner.go:195] Run: which crictl
	I0819 19:53:03.621241  475310 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0819 19:53:03.621288  475310 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 19:53:03.621330  475310 ssh_runner.go:195] Run: which crictl
	I0819 19:53:03.630251  475310 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0819 19:53:03.630303  475310 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:53:03.630259  475310 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0819 19:53:03.630348  475310 ssh_runner.go:195] Run: which crictl
	I0819 19:53:03.630386  475310 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 19:53:03.630436  475310 ssh_runner.go:195] Run: which crictl
	I0819 19:53:03.630338  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 19:53:03.646483  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0819 19:53:03.646535  475310 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0819 19:53:03.646544  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0819 19:53:03.646502  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0819 19:53:03.646583  475310 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 19:53:03.646624  475310 ssh_runner.go:195] Run: which crictl
	I0819 19:53:03.646625  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:53:03.696421  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 19:53:03.696446  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 19:53:03.756509  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0819 19:53:03.756549  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0819 19:53:03.756509  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:53:03.756609  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0819 19:53:03.756633  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0819 19:53:03.829315  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 19:53:03.829315  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 19:53:03.884585  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0819 19:53:03.903281  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:53:03.903282  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0819 19:53:03.911320  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0819 19:53:03.911359  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0819 19:53:03.965344  475310 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0819 19:53:03.965394  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 19:53:03.965455  475310 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0819 19:53:03.992972  475310 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0819 19:53:03.993066  475310 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 19:53:04.044022  475310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:53:04.050714  475310 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0819 19:53:04.050861  475310 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0819 19:53:04.050942  475310 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 19:53:04.051016  475310 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0819 19:53:04.051035  475310 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 19:53:04.051082  475310 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0819 19:53:04.073492  475310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0819 19:53:04.073549  475310 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0819 19:53:04.073583  475310 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0819 19:53:04.073600  475310 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0819 19:53:04.073617  475310 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0819 19:53:04.073642  475310 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0819 19:53:04.073645  475310 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0819 19:53:04.214763  475310 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0819 19:53:04.214814  475310 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0819 19:53:04.214863  475310 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0819 19:53:04.214938  475310 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0819 19:53:04.214944  475310 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0819 19:53:04.215040  475310 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0819 19:53:08.078188  475310 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (4.004443339s)
	I0819 19:53:08.078228  475310 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0819 19:53:08.078254  475310 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 19:53:08.078255  475310 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.863188917s)
	I0819 19:53:08.078294  475310 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0819 19:53:08.078301  475310 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0819 19:53:08.219023  475310 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0819 19:53:08.219081  475310 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0819 19:53:08.219140  475310 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0819 19:53:08.966479  475310 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0819 19:53:08.966542  475310 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 19:53:08.966595  475310 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0819 19:53:09.311416  475310 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 19:53:09.311464  475310 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0819 19:53:09.311518  475310 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0819 19:53:09.755101  475310 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0819 19:53:09.755154  475310 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0819 19:53:09.755208  475310 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0819 19:53:10.497229  475310 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0819 19:53:10.497290  475310 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0819 19:53:10.497352  475310 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0819 19:53:11.344857  475310 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0819 19:53:11.344919  475310 cache_images.go:123] Successfully loaded all cached images
	I0819 19:53:11.344927  475310 cache_images.go:92] duration metric: took 8.043905483s to LoadCachedImages
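Each cached image archive is loaded into CRI-O's image store with podman and then becomes visible through the CRI; for a single image the sequence on the node is roughly:

# load one cached archive into the runtime's image store, then verify via crictl
sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
sudo crictl images | grep etcd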
	I0819 19:53:11.344946  475310 kubeadm.go:934] updating node { 192.168.39.61 8443 v1.24.4 crio true true} ...
	I0819 19:53:11.345057  475310 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-247827 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-247827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
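That fragment is what ends up in the kubelet systemd drop-in; writing it by hand would look roughly like this (the drop-in path matches the scp destination a few steps further down):

sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-247827 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.61

[Install]
EOF
sudo systemctl daemon-reload && sudo systemctl start kubelet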
	I0819 19:53:11.345157  475310 ssh_runner.go:195] Run: crio config
	I0819 19:53:11.390598  475310 cni.go:84] Creating CNI manager for ""
	I0819 19:53:11.390621  475310 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:53:11.390631  475310 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:53:11.390651  475310 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.61 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-247827 NodeName:test-preload-247827 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:53:11.390781  475310 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-247827"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
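The YAML block above is the manifest that later gets copied to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of how such a manifest can be rendered from a handful of options, the Go sketch below uses text/template; it is not minikube's actual generator, and the struct and field names are assumptions made only for this example (the values are taken from the log).

package main

import (
    "os"
    "text/template"
)

// initOpts is an illustrative subset of the options seen in the log above;
// the field names are assumptions for this sketch, not minikube's real types.
type initOpts struct {
    AdvertiseAddress string
    BindPort         int
    NodeName         string
    CRISocket        string
    NodeIP           string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
    t := template.Must(template.New("init").Parse(initTmpl))
    // Values copied from the log above (192.168.39.61 / test-preload-247827).
    opts := initOpts{
        AdvertiseAddress: "192.168.39.61",
        BindPort:         8443,
        NodeName:         "test-preload-247827",
        CRISocket:        "unix:///var/run/crio/crio.sock",
        NodeIP:           "192.168.39.61",
    }
    if err := t.Execute(os.Stdout, opts); err != nil {
        panic(err)
    }
}

Running this prints the InitConfiguration stanza shown in the log with the address and node name substituted; the remaining documents (ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) would be rendered the same way.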
	I0819 19:53:11.390844  475310 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0819 19:53:11.400728  475310 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:53:11.400809  475310 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:53:11.410090  475310 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0819 19:53:11.426745  475310 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:53:11.443442  475310 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0819 19:53:11.460492  475310 ssh_runner.go:195] Run: grep 192.168.39.61	control-plane.minikube.internal$ /etc/hosts
	I0819 19:53:11.464210  475310 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:53:11.475914  475310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:53:11.596896  475310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:53:11.623608  475310 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/test-preload-247827 for IP: 192.168.39.61
	I0819 19:53:11.623633  475310 certs.go:194] generating shared ca certs ...
	I0819 19:53:11.623655  475310 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:53:11.623819  475310 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:53:11.623870  475310 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:53:11.623885  475310 certs.go:256] generating profile certs ...
	I0819 19:53:11.623980  475310 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/test-preload-247827/client.key
	I0819 19:53:11.624072  475310 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/test-preload-247827/apiserver.key.4fdae874
	I0819 19:53:11.624135  475310 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/test-preload-247827/proxy-client.key
	I0819 19:53:11.624273  475310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:53:11.624317  475310 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:53:11.624331  475310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:53:11.624365  475310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:53:11.624400  475310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:53:11.624432  475310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:53:11.624517  475310 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:53:11.625248  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:53:11.679473  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:53:11.714315  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:53:11.745254  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:53:11.779418  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/test-preload-247827/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 19:53:11.815878  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/test-preload-247827/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:53:11.840455  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/test-preload-247827/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:53:11.864787  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/test-preload-247827/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 19:53:11.888614  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:53:11.912072  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:53:11.935605  475310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:53:11.959418  475310 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:53:11.976001  475310 ssh_runner.go:195] Run: openssl version
	I0819 19:53:11.981621  475310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:53:11.992219  475310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:53:11.996619  475310 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:53:11.996695  475310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:53:12.002504  475310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:53:12.013093  475310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:53:12.023885  475310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:53:12.028573  475310 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:53:12.028647  475310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:53:12.034380  475310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:53:12.045245  475310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:53:12.056031  475310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:53:12.060557  475310 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:53:12.060636  475310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:53:12.066340  475310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:53:12.076926  475310 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:53:12.081425  475310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:53:12.087389  475310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:53:12.093294  475310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:53:12.099419  475310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:53:12.105292  475310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:53:12.111083  475310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
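The `openssl x509 -noout -in ... -checkend 86400` runs above only ask whether each certificate stays valid for the next 24 hours. A minimal Go equivalent using crypto/x509 is sketched below; the path mirrors one from the log and the snippet is purely illustrative, not part of minikube.

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// expiresWithin reports whether the PEM certificate at path expires inside
// the given window (the log's `openssl x509 -checkend 86400` uses 24h).
func expiresWithin(path string, window time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("no PEM block in %s", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
    // Path copied from the log; adjust for other certificates.
    soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("expires within 24h:", soon)
}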
	I0819 19:53:12.117022  475310 kubeadm.go:392] StartCluster: {Name:test-preload-247827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-247827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:53:12.117113  475310 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:53:12.117178  475310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:53:12.157242  475310 cri.go:89] found id: ""
	I0819 19:53:12.157314  475310 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:53:12.167195  475310 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:53:12.167223  475310 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:53:12.167279  475310 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:53:12.176756  475310 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:53:12.177291  475310 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-247827" does not appear in /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:53:12.177412  475310 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-430949/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-247827" cluster setting kubeconfig missing "test-preload-247827" context setting]
	I0819 19:53:12.177725  475310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:53:12.178317  475310 kapi.go:59] client config for test-preload-247827: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/test-preload-247827/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/test-preload-247827/client.key", CAFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 19:53:12.179001  475310 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:53:12.188578  475310 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.61
	I0819 19:53:12.188620  475310 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:53:12.188636  475310 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:53:12.188692  475310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:53:12.222968  475310 cri.go:89] found id: ""
	I0819 19:53:12.223037  475310 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:53:12.239184  475310 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:53:12.249107  475310 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:53:12.249145  475310 kubeadm.go:157] found existing configuration files:
	
	I0819 19:53:12.249190  475310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:53:12.258334  475310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:53:12.258407  475310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:53:12.267830  475310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:53:12.277559  475310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:53:12.277621  475310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:53:12.287544  475310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:53:12.297795  475310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:53:12.297862  475310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:53:12.308392  475310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:53:12.318578  475310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:53:12.318661  475310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
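The grep/rm sequence above keeps each kubeconfig under /etc/kubernetes only if it already references https://control-plane.minikube.internal:8443, and removes it otherwise so the following init phase can regenerate it. A condensed Go sketch of that check is shown below; the paths and endpoint are copied from the log, and error handling is deliberately simplified.

package main

import (
    "fmt"
    "os"
    "strings"
)

// Keep a kubeconfig only if it already points at the expected control-plane
// endpoint; otherwise remove it so `kubeadm init phase kubeconfig` rewrites it.
func main() {
    endpoint := "https://control-plane.minikube.internal:8443"
    files := []string{
        "/etc/kubernetes/admin.conf",
        "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    }
    for _, f := range files {
        data, err := os.ReadFile(f)
        if err == nil && strings.Contains(string(data), endpoint) {
            fmt.Println("keeping", f)
            continue
        }
        // Missing or pointing elsewhere: drop it (ignore "does not exist").
        if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
            fmt.Fprintln(os.Stderr, rmErr)
        }
    }
}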
	I0819 19:53:12.329167  475310 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:53:12.339691  475310 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:53:12.428819  475310 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:53:13.678050  475310 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.249188999s)
	I0819 19:53:13.678095  475310 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:53:13.942499  475310 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:53:14.008594  475310 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
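The restart path above shells out to kubeadm one phase at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of running a full `kubeadm init`. A simplified driver for the same sequence is sketched below; the binary and config paths are taken from the log, and the snippet is illustrative rather than minikube's actual implementation.

package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    kubeadm := "/var/lib/minikube/binaries/v1.24.4/kubeadm" // path from the log
    cfg := "/var/tmp/minikube/kubeadm.yaml"
    phases := [][]string{
        {"init", "phase", "certs", "all", "--config", cfg},
        {"init", "phase", "kubeconfig", "all", "--config", cfg},
        {"init", "phase", "kubelet-start", "--config", cfg},
        {"init", "phase", "control-plane", "all", "--config", cfg},
        {"init", "phase", "etcd", "local", "--config", cfg},
    }
    for _, args := range phases {
        cmd := exec.Command(kubeadm, args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
            os.Exit(1)
        }
    }
}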
	I0819 19:53:14.080183  475310 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:53:14.080279  475310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:53:14.580761  475310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:53:15.080566  475310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:53:15.094386  475310 api_server.go:72] duration metric: took 1.014219434s to wait for apiserver process to appear ...
	I0819 19:53:15.094418  475310 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:53:15.094443  475310 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0819 19:53:15.094999  475310 api_server.go:269] stopped: https://192.168.39.61:8443/healthz: Get "https://192.168.39.61:8443/healthz": dial tcp 192.168.39.61:8443: connect: connection refused
	I0819 19:53:15.594753  475310 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0819 19:53:20.595554  475310 api_server.go:269] stopped: https://192.168.39.61:8443/healthz: Get "https://192.168.39.61:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 19:53:20.595633  475310 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0819 19:53:25.596639  475310 api_server.go:269] stopped: https://192.168.39.61:8443/healthz: Get "https://192.168.39.61:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 19:53:25.596695  475310 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0819 19:53:30.597489  475310 api_server.go:269] stopped: https://192.168.39.61:8443/healthz: Get "https://192.168.39.61:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 19:53:30.597547  475310 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0819 19:53:35.598104  475310 api_server.go:269] stopped: https://192.168.39.61:8443/healthz: Get "https://192.168.39.61:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 19:53:35.598153  475310 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0819 19:53:35.799199  475310 api_server.go:269] stopped: https://192.168.39.61:8443/healthz: Get "https://192.168.39.61:8443/healthz": read tcp 192.168.39.1:37914->192.168.39.61:8443: read: connection reset by peer
	I0819 19:53:36.094588  475310 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0819 19:53:36.095208  475310 api_server.go:269] stopped: https://192.168.39.61:8443/healthz: Get "https://192.168.39.61:8443/healthz": dial tcp 192.168.39.61:8443: connect: connection refused
	I0819 19:53:36.594778  475310 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0819 19:53:39.116689  475310 api_server.go:279] https://192.168.39.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:53:39.116725  475310 api_server.go:103] status: https://192.168.39.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:53:39.116745  475310 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0819 19:53:39.146505  475310 api_server.go:279] https://192.168.39.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:53:39.146541  475310 api_server.go:103] status: https://192.168.39.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:53:39.595179  475310 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0819 19:53:39.600647  475310 api_server.go:279] https://192.168.39.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:53:39.600683  475310 api_server.go:103] status: https://192.168.39.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:53:40.095344  475310 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0819 19:53:40.118587  475310 api_server.go:279] https://192.168.39.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:53:40.118623  475310 api_server.go:103] status: https://192.168.39.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:53:40.595310  475310 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0819 19:53:40.601036  475310 api_server.go:279] https://192.168.39.61:8443/healthz returned 200:
	ok
	I0819 19:53:40.607756  475310 api_server.go:141] control plane version: v1.24.4
	I0819 19:53:40.607787  475310 api_server.go:131] duration metric: took 25.513360006s to wait for apiserver health ...
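The healthz loop above treats refused connections, anonymous 403s, and 500s from unfinished post-start hooks as "not ready yet" and only succeeds once /healthz returns 200 with "ok". A standalone poller in the same spirit is sketched below; unlike minikube, it skips TLS verification purely for brevity.

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes; transient errors and non-200 codes mean "retry".
func waitForHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout: 5 * time.Second,
        // NOTE: certificate verification is skipped only in this sketch.
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
            fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
    if err := waitForHealthz("https://192.168.39.61:8443/healthz", 2*time.Minute); err != nil {
        fmt.Println(err)
    }
}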
	I0819 19:53:40.607818  475310 cni.go:84] Creating CNI manager for ""
	I0819 19:53:40.607828  475310 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:53:40.609615  475310 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:53:40.610899  475310 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:53:40.626324  475310 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:53:40.643808  475310 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:53:40.643923  475310 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 19:53:40.643947  475310 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 19:53:40.653772  475310 system_pods.go:59] 7 kube-system pods found
	I0819 19:53:40.653811  475310 system_pods.go:61] "coredns-6d4b75cb6d-mqbdv" [ba618be1-cf31-4ece-8851-839c008cb635] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:53:40.653819  475310 system_pods.go:61] "etcd-test-preload-247827" [087fba43-de03-4f47-87ad-331ce7fe7f1b] Running
	I0819 19:53:40.653825  475310 system_pods.go:61] "kube-apiserver-test-preload-247827" [7aa6f5da-0f23-41ef-8c7c-fe60bcd65402] Running
	I0819 19:53:40.653830  475310 system_pods.go:61] "kube-controller-manager-test-preload-247827" [f701f8e2-8fd0-4463-9639-850ec4b2fa20] Running
	I0819 19:53:40.653834  475310 system_pods.go:61] "kube-proxy-sczp2" [d43e1f06-2cdb-4244-bc4b-79f998d6973b] Running
	I0819 19:53:40.653839  475310 system_pods.go:61] "kube-scheduler-test-preload-247827" [fbe31eaa-a82a-4b07-bbd8-578a8ab94c57] Running
	I0819 19:53:40.653851  475310 system_pods.go:61] "storage-provisioner" [367930bf-0d52-488e-9cfc-312f33a8674e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:53:40.653862  475310 system_pods.go:74] duration metric: took 10.02225ms to wait for pod list to return data ...
	I0819 19:53:40.653878  475310 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:53:40.658220  475310 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:53:40.658264  475310 node_conditions.go:123] node cpu capacity is 2
	I0819 19:53:40.658290  475310 node_conditions.go:105] duration metric: took 4.405057ms to run NodePressure ...
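The NodePressure step reads node capacity (ephemeral storage and CPU in the lines above) straight from the API. An equivalent listing with client-go is sketched below; the kubeconfig path is a placeholder for this example.

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Placeholder kubeconfig path for this sketch.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, n := range nodes.Items {
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    }
}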
	I0819 19:53:40.658320  475310 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:53:40.904612  475310 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:53:40.910505  475310 retry.go:31] will retry after 127.656864ms: kubelet not initialised
	I0819 19:53:41.044270  475310 retry.go:31] will retry after 187.855442ms: kubelet not initialised
	I0819 19:53:41.237880  475310 retry.go:31] will retry after 775.786679ms: kubelet not initialised
	I0819 19:53:42.019454  475310 kubeadm.go:739] kubelet initialised
	I0819 19:53:42.019484  475310 kubeadm.go:740] duration metric: took 1.114842102s waiting for restarted kubelet to initialise ...
	I0819 19:53:42.019494  475310 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:53:42.025024  475310 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-mqbdv" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:42.030673  475310 pod_ready.go:98] node "test-preload-247827" hosting pod "coredns-6d4b75cb6d-mqbdv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:42.030702  475310 pod_ready.go:82] duration metric: took 5.648077ms for pod "coredns-6d4b75cb6d-mqbdv" in "kube-system" namespace to be "Ready" ...
	E0819 19:53:42.030712  475310 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-247827" hosting pod "coredns-6d4b75cb6d-mqbdv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:42.030719  475310 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:42.036755  475310 pod_ready.go:98] node "test-preload-247827" hosting pod "etcd-test-preload-247827" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:42.036786  475310 pod_ready.go:82] duration metric: took 6.048197ms for pod "etcd-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	E0819 19:53:42.036796  475310 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-247827" hosting pod "etcd-test-preload-247827" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:42.036804  475310 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:42.042430  475310 pod_ready.go:98] node "test-preload-247827" hosting pod "kube-apiserver-test-preload-247827" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:42.042460  475310 pod_ready.go:82] duration metric: took 5.649794ms for pod "kube-apiserver-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	E0819 19:53:42.042470  475310 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-247827" hosting pod "kube-apiserver-test-preload-247827" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:42.042476  475310 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:42.048890  475310 pod_ready.go:98] node "test-preload-247827" hosting pod "kube-controller-manager-test-preload-247827" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:42.048925  475310 pod_ready.go:82] duration metric: took 6.438049ms for pod "kube-controller-manager-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	E0819 19:53:42.048938  475310 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-247827" hosting pod "kube-controller-manager-test-preload-247827" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:42.048946  475310 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sczp2" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:42.418392  475310 pod_ready.go:98] node "test-preload-247827" hosting pod "kube-proxy-sczp2" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:42.418433  475310 pod_ready.go:82] duration metric: took 369.475889ms for pod "kube-proxy-sczp2" in "kube-system" namespace to be "Ready" ...
	E0819 19:53:42.418447  475310 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-247827" hosting pod "kube-proxy-sczp2" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:42.418455  475310 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:42.818755  475310 pod_ready.go:98] node "test-preload-247827" hosting pod "kube-scheduler-test-preload-247827" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:42.818794  475310 pod_ready.go:82] duration metric: took 400.329554ms for pod "kube-scheduler-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	E0819 19:53:42.818807  475310 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-247827" hosting pod "kube-scheduler-test-preload-247827" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:42.818817  475310 pod_ready.go:39] duration metric: took 799.311683ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:53:42.818840  475310 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:53:42.830023  475310 ops.go:34] apiserver oom_adj: -16
	I0819 19:53:42.830051  475310 kubeadm.go:597] duration metric: took 30.662819884s to restartPrimaryControlPlane
	I0819 19:53:42.830064  475310 kubeadm.go:394] duration metric: took 30.713052582s to StartCluster
	I0819 19:53:42.830114  475310 settings.go:142] acquiring lock: {Name:mkc71def6be9966e5f008a22161bf3ed26f482b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:53:42.830203  475310 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:53:42.831033  475310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:53:42.831350  475310 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:53:42.831437  475310 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:53:42.831571  475310 addons.go:69] Setting default-storageclass=true in profile "test-preload-247827"
	I0819 19:53:42.831607  475310 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-247827"
	I0819 19:53:42.831636  475310 config.go:182] Loaded profile config "test-preload-247827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0819 19:53:42.831569  475310 addons.go:69] Setting storage-provisioner=true in profile "test-preload-247827"
	I0819 19:53:42.831695  475310 addons.go:234] Setting addon storage-provisioner=true in "test-preload-247827"
	W0819 19:53:42.831704  475310 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:53:42.831730  475310 host.go:66] Checking if "test-preload-247827" exists ...
	I0819 19:53:42.831966  475310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:53:42.832000  475310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:53:42.832090  475310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:53:42.832119  475310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:53:42.832948  475310 out.go:177] * Verifying Kubernetes components...
	I0819 19:53:42.834415  475310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:53:42.848413  475310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
	I0819 19:53:42.848427  475310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
	I0819 19:53:42.848967  475310 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:53:42.849087  475310 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:53:42.849512  475310 main.go:141] libmachine: Using API Version  1
	I0819 19:53:42.849529  475310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:53:42.849664  475310 main.go:141] libmachine: Using API Version  1
	I0819 19:53:42.849689  475310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:53:42.849892  475310 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:53:42.850003  475310 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:53:42.850206  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetState
	I0819 19:53:42.850439  475310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:53:42.850487  475310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:53:42.852966  475310 kapi.go:59] client config for test-preload-247827: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/test-preload-247827/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/test-preload-247827/client.key", CAFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 19:53:42.853379  475310 addons.go:234] Setting addon default-storageclass=true in "test-preload-247827"
	W0819 19:53:42.853400  475310 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:53:42.853436  475310 host.go:66] Checking if "test-preload-247827" exists ...
	I0819 19:53:42.853849  475310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:53:42.853883  475310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:53:42.867082  475310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43727
	I0819 19:53:42.867622  475310 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:53:42.868185  475310 main.go:141] libmachine: Using API Version  1
	I0819 19:53:42.868212  475310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:53:42.868643  475310 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:53:42.868861  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetState
	I0819 19:53:42.870144  475310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46231
	I0819 19:53:42.870660  475310 main.go:141] libmachine: (test-preload-247827) Calling .DriverName
	I0819 19:53:42.870675  475310 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:53:42.871150  475310 main.go:141] libmachine: Using API Version  1
	I0819 19:53:42.871165  475310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:53:42.871432  475310 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:53:42.872028  475310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:53:42.872065  475310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:53:42.872682  475310 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:53:42.874239  475310 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:53:42.874266  475310 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:53:42.874291  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHHostname
	I0819 19:53:42.877697  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:53:42.878199  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:53:42.878234  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:53:42.878479  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHPort
	I0819 19:53:42.878709  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:53:42.878878  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHUsername
	I0819 19:53:42.879029  475310 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/test-preload-247827/id_rsa Username:docker}
	I0819 19:53:42.888508  475310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I0819 19:53:42.889053  475310 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:53:42.889712  475310 main.go:141] libmachine: Using API Version  1
	I0819 19:53:42.889738  475310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:53:42.890081  475310 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:53:42.890273  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetState
	I0819 19:53:42.891993  475310 main.go:141] libmachine: (test-preload-247827) Calling .DriverName
	I0819 19:53:42.892246  475310 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:53:42.892264  475310 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:53:42.892283  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHHostname
	I0819 19:53:42.895373  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:53:42.895479  475310 main.go:141] libmachine: (test-preload-247827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:9d:b9", ip: ""} in network mk-test-preload-247827: {Iface:virbr1 ExpiryTime:2024-08-19 20:52:48 +0000 UTC Type:0 Mac:52:54:00:05:9d:b9 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-247827 Clientid:01:52:54:00:05:9d:b9}
	I0819 19:53:42.895517  475310 main.go:141] libmachine: (test-preload-247827) DBG | domain test-preload-247827 has defined IP address 192.168.39.61 and MAC address 52:54:00:05:9d:b9 in network mk-test-preload-247827
	I0819 19:53:42.895690  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHPort
	I0819 19:53:42.895892  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHKeyPath
	I0819 19:53:42.896070  475310 main.go:141] libmachine: (test-preload-247827) Calling .GetSSHUsername
	I0819 19:53:42.896206  475310 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/test-preload-247827/id_rsa Username:docker}
	I0819 19:53:42.991218  475310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:53:43.012610  475310 node_ready.go:35] waiting up to 6m0s for node "test-preload-247827" to be "Ready" ...
	I0819 19:53:43.082353  475310 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:53:43.092403  475310 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:53:44.041997  475310 main.go:141] libmachine: Making call to close driver server
	I0819 19:53:44.042027  475310 main.go:141] libmachine: (test-preload-247827) Calling .Close
	I0819 19:53:44.042343  475310 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:53:44.042359  475310 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:53:44.042387  475310 main.go:141] libmachine: Making call to close driver server
	I0819 19:53:44.042399  475310 main.go:141] libmachine: (test-preload-247827) Calling .Close
	I0819 19:53:44.042673  475310 main.go:141] libmachine: (test-preload-247827) DBG | Closing plugin on server side
	I0819 19:53:44.042702  475310 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:53:44.042712  475310 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:53:44.050818  475310 main.go:141] libmachine: Making call to close driver server
	I0819 19:53:44.050844  475310 main.go:141] libmachine: (test-preload-247827) Calling .Close
	I0819 19:53:44.051129  475310 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:53:44.051147  475310 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:53:44.051163  475310 main.go:141] libmachine: (test-preload-247827) DBG | Closing plugin on server side
	I0819 19:53:44.051242  475310 main.go:141] libmachine: Making call to close driver server
	I0819 19:53:44.051264  475310 main.go:141] libmachine: (test-preload-247827) Calling .Close
	I0819 19:53:44.051499  475310 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:53:44.051527  475310 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:53:44.051550  475310 main.go:141] libmachine: (test-preload-247827) DBG | Closing plugin on server side
	I0819 19:53:44.058053  475310 main.go:141] libmachine: Making call to close driver server
	I0819 19:53:44.058075  475310 main.go:141] libmachine: (test-preload-247827) Calling .Close
	I0819 19:53:44.058359  475310 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:53:44.058380  475310 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:53:44.058385  475310 main.go:141] libmachine: (test-preload-247827) DBG | Closing plugin on server side
	I0819 19:53:44.060142  475310 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 19:53:44.061302  475310 addons.go:510] duration metric: took 1.229879141s for enable addons: enabled=[storage-provisioner default-storageclass]
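Each addon above is enabled by copying its manifest onto the node and applying it with the cluster's own kubectl binary, with KUBECONFIG pointed at /var/lib/minikube/kubeconfig. A stripped-down version of that apply step is sketched below; the paths come from the log, the code runs on the node itself, and it is illustrative only.

package main

import (
    "fmt"
    "os"
    "os/exec"
)

// applyAddon mirrors the apply command in the log: the node-local kubectl is
// invoked with KUBECONFIG set to the cluster's admin kubeconfig.
func applyAddon(manifest string) error {
    kubectl := "/var/lib/minikube/binaries/v1.24.4/kubectl" // path from the log
    cmd := exec.Command(kubectl, "apply", "-f", manifest)
    cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    return cmd.Run()
}

func main() {
    for _, m := range []string{
        "/etc/kubernetes/addons/storage-provisioner.yaml",
        "/etc/kubernetes/addons/storageclass.yaml",
    } {
        if err := applyAddon(m); err != nil {
            fmt.Fprintln(os.Stderr, "apply", m, "failed:", err)
        }
    }
}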
	I0819 19:53:45.016789  475310 node_ready.go:53] node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:47.516940  475310 node_ready.go:53] node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:49.517480  475310 node_ready.go:53] node "test-preload-247827" has status "Ready":"False"
	I0819 19:53:50.017613  475310 node_ready.go:49] node "test-preload-247827" has status "Ready":"True"
	I0819 19:53:50.017643  475310 node_ready.go:38] duration metric: took 7.004988586s for node "test-preload-247827" to be "Ready" ...
	I0819 19:53:50.017654  475310 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:53:50.023611  475310 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-mqbdv" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:50.030284  475310 pod_ready.go:93] pod "coredns-6d4b75cb6d-mqbdv" in "kube-system" namespace has status "Ready":"True"
	I0819 19:53:50.030315  475310 pod_ready.go:82] duration metric: took 6.676453ms for pod "coredns-6d4b75cb6d-mqbdv" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:50.030327  475310 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:50.037309  475310 pod_ready.go:93] pod "etcd-test-preload-247827" in "kube-system" namespace has status "Ready":"True"
	I0819 19:53:50.037331  475310 pod_ready.go:82] duration metric: took 6.997674ms for pod "etcd-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:50.037341  475310 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:52.044732  475310 pod_ready.go:103] pod "kube-apiserver-test-preload-247827" in "kube-system" namespace has status "Ready":"False"
	I0819 19:53:52.544834  475310 pod_ready.go:93] pod "kube-apiserver-test-preload-247827" in "kube-system" namespace has status "Ready":"True"
	I0819 19:53:52.544862  475310 pod_ready.go:82] duration metric: took 2.507515016s for pod "kube-apiserver-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:52.544873  475310 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:54.551501  475310 pod_ready.go:103] pod "kube-controller-manager-test-preload-247827" in "kube-system" namespace has status "Ready":"False"
	I0819 19:53:56.551636  475310 pod_ready.go:93] pod "kube-controller-manager-test-preload-247827" in "kube-system" namespace has status "Ready":"True"
	I0819 19:53:56.551666  475310 pod_ready.go:82] duration metric: took 4.006786475s for pod "kube-controller-manager-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:56.551677  475310 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sczp2" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:56.557037  475310 pod_ready.go:93] pod "kube-proxy-sczp2" in "kube-system" namespace has status "Ready":"True"
	I0819 19:53:56.557067  475310 pod_ready.go:82] duration metric: took 5.382802ms for pod "kube-proxy-sczp2" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:56.557092  475310 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:56.562339  475310 pod_ready.go:93] pod "kube-scheduler-test-preload-247827" in "kube-system" namespace has status "Ready":"True"
	I0819 19:53:56.562371  475310 pod_ready.go:82] duration metric: took 5.270069ms for pod "kube-scheduler-test-preload-247827" in "kube-system" namespace to be "Ready" ...
	I0819 19:53:56.562386  475310 pod_ready.go:39] duration metric: took 6.544720753s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:53:56.562405  475310 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:53:56.562467  475310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:53:56.577195  475310 api_server.go:72] duration metric: took 13.745806219s to wait for apiserver process to appear ...
	I0819 19:53:56.577221  475310 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:53:56.577241  475310 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0819 19:53:56.582248  475310 api_server.go:279] https://192.168.39.61:8443/healthz returned 200:
	ok
	I0819 19:53:56.583271  475310 api_server.go:141] control plane version: v1.24.4
	I0819 19:53:56.583299  475310 api_server.go:131] duration metric: took 6.070351ms to wait for apiserver health ...
	I0819 19:53:56.583328  475310 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:53:56.588933  475310 system_pods.go:59] 7 kube-system pods found
	I0819 19:53:56.588965  475310 system_pods.go:61] "coredns-6d4b75cb6d-mqbdv" [ba618be1-cf31-4ece-8851-839c008cb635] Running
	I0819 19:53:56.588972  475310 system_pods.go:61] "etcd-test-preload-247827" [087fba43-de03-4f47-87ad-331ce7fe7f1b] Running
	I0819 19:53:56.588977  475310 system_pods.go:61] "kube-apiserver-test-preload-247827" [7aa6f5da-0f23-41ef-8c7c-fe60bcd65402] Running
	I0819 19:53:56.588982  475310 system_pods.go:61] "kube-controller-manager-test-preload-247827" [f701f8e2-8fd0-4463-9639-850ec4b2fa20] Running
	I0819 19:53:56.588986  475310 system_pods.go:61] "kube-proxy-sczp2" [d43e1f06-2cdb-4244-bc4b-79f998d6973b] Running
	I0819 19:53:56.588991  475310 system_pods.go:61] "kube-scheduler-test-preload-247827" [fbe31eaa-a82a-4b07-bbd8-578a8ab94c57] Running
	I0819 19:53:56.589002  475310 system_pods.go:61] "storage-provisioner" [367930bf-0d52-488e-9cfc-312f33a8674e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:53:56.589011  475310 system_pods.go:74] duration metric: took 5.675835ms to wait for pod list to return data ...
	I0819 19:53:56.589024  475310 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:53:56.592033  475310 default_sa.go:45] found service account: "default"
	I0819 19:53:56.592067  475310 default_sa.go:55] duration metric: took 3.035401ms for default service account to be created ...
	I0819 19:53:56.592078  475310 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:53:56.620138  475310 system_pods.go:86] 7 kube-system pods found
	I0819 19:53:56.620171  475310 system_pods.go:89] "coredns-6d4b75cb6d-mqbdv" [ba618be1-cf31-4ece-8851-839c008cb635] Running
	I0819 19:53:56.620177  475310 system_pods.go:89] "etcd-test-preload-247827" [087fba43-de03-4f47-87ad-331ce7fe7f1b] Running
	I0819 19:53:56.620184  475310 system_pods.go:89] "kube-apiserver-test-preload-247827" [7aa6f5da-0f23-41ef-8c7c-fe60bcd65402] Running
	I0819 19:53:56.620189  475310 system_pods.go:89] "kube-controller-manager-test-preload-247827" [f701f8e2-8fd0-4463-9639-850ec4b2fa20] Running
	I0819 19:53:56.620192  475310 system_pods.go:89] "kube-proxy-sczp2" [d43e1f06-2cdb-4244-bc4b-79f998d6973b] Running
	I0819 19:53:56.620196  475310 system_pods.go:89] "kube-scheduler-test-preload-247827" [fbe31eaa-a82a-4b07-bbd8-578a8ab94c57] Running
	I0819 19:53:56.620201  475310 system_pods.go:89] "storage-provisioner" [367930bf-0d52-488e-9cfc-312f33a8674e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:53:56.620209  475310 system_pods.go:126] duration metric: took 28.124452ms to wait for k8s-apps to be running ...
	I0819 19:53:56.620219  475310 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:53:56.620268  475310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:53:56.634721  475310 system_svc.go:56] duration metric: took 14.487696ms WaitForService to wait for kubelet
	I0819 19:53:56.634760  475310 kubeadm.go:582] duration metric: took 13.803378247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:53:56.634786  475310 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:53:56.817759  475310 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:53:56.817794  475310 node_conditions.go:123] node cpu capacity is 2
	I0819 19:53:56.817809  475310 node_conditions.go:105] duration metric: took 183.016476ms to run NodePressure ...
	I0819 19:53:56.817824  475310 start.go:241] waiting for startup goroutines ...
	I0819 19:53:56.817832  475310 start.go:246] waiting for cluster config update ...
	I0819 19:53:56.817845  475310 start.go:255] writing updated cluster config ...
	I0819 19:53:56.818235  475310 ssh_runner.go:195] Run: rm -f paused
	I0819 19:53:56.868209  475310 start.go:600] kubectl: 1.31.0, cluster: 1.24.4 (minor skew: 7)
	I0819 19:53:56.870063  475310 out.go:201] 
	W0819 19:53:56.871353  475310 out.go:270] ! /usr/local/bin/kubectl is version 1.31.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0819 19:53:56.872517  475310 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0819 19:53:56.873796  475310 out.go:177] * Done! kubectl is now configured to use "test-preload-247827" cluster and "default" namespace by default
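	[editor's note] The wait loop in the log above ends with minikube polling the apiserver's /healthz endpoint at https://192.168.39.61:8443 and accepting a 200 response with the plain-text body "ok". Below is a minimal Go sketch of that kind of probe, added for illustration only: the address, the 2-minute deadline, and the decision to skip TLS verification are assumptions for a throwaway test VM (on a default RBAC setup /healthz is typically readable anonymously), whereas minikube itself talks to the cluster with its kubeconfig client certificates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz polls <base>/healthz until it returns HTTP 200 with body "ok"
	// or the deadline expires. Hypothetical helper, not minikube's implementation.
	func probeHealthz(base string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip certificate verification for a local test VM.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(base + "/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", base, timeout)
	}

	func main() {
		if err := probeHealthz("https://192.168.39.61:8443", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz: ok")
	}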
	
	
	==> CRI-O <==
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.749753148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097237749730937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af51d31d-423d-4f8b-8b17-b946f1b76686 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.750246271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f442a11e-a272-475f-9a1e-db96750a37c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.750337313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f442a11e-a272-475f-9a1e-db96750a37c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.750533818Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:223f4e427824395978fc2aba02a80f845d7fb62a7d0f2c6fcf65d038c6a5ce1f,PodSandboxId:b9da03b9b6fce26f2e934ca7b810e0d1a4e5fe9928a25f3e1dd4ab39147a09b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724097228268867319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-mqbdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba618be1-cf31-4ece-8851-839c008cb635,},Annotations:map[string]string{io.kubernetes.container.hash: 5034e344,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082d659946d40c6af21ac2000fba30e8c8079da4a1ade7ecec3e0848450d8a96,PodSandboxId:1f4fde3d03f0f46f5a838e30dcd0e2c00198f9adb7f41de9b3f5d5b199cd07b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724097221267027549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 367930bf-0d52-488e-9cfc-312f33a8674e,},Annotations:map[string]string{io.kubernetes.container.hash: 1ef0fdcb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aecd0af03ea08cc62353b2ef4022a8772c5bae77cbe84956f5a79600b1108216,PodSandboxId:2f26d1cd445e369e24dae0e7d7624a484f568fa611edb1669c252d9737309fa2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724097221096788535,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sczp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d43
e1f06-2cdb-4244-bc4b-79f998d6973b,},Annotations:map[string]string{io.kubernetes.container.hash: 16d7ecd6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30ebfa5d887327bcde084009f9f3e13e81520c69a34bd684cea21bbd2a8e20f,PodSandboxId:a8d041d56a0627bb3ae82bbc7163ff5d7bcf154e914de733f8316c3f0c072bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724097219268508547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 3d72497c73e653e57759258f1346da7e,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa24db4848d6f2d829fcd552758f4cd197726c3d19f619618f4d3f418df6b81,PodSandboxId:a6ca66518d6ebac01ef64e80d1284731b47c47c24aade6f703554bb92d2dab74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724097216242810459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 69e3da336b1c0c12bafafe72119584f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8e9c603,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05f129640f148e5cb943cff24b4da51a187a75d871d92a5c0d3676654bc573bb,PodSandboxId:5c351eeb402c2a3235b83691b249105542f9ee09dbddaaed2275b94070be94cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724097214441509743,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad39874cc92b56b14af8142bb687bbd,},
Annotations:map[string]string{io.kubernetes.container.hash: b9c283f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89bb7aa30eba56590f2b1501926c0d0f7e57a99b2c0afe45a9032a6e923da679,PodSandboxId:a6ca66518d6ebac01ef64e80d1284731b47c47c24aade6f703554bb92d2dab74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1724097194744005940,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69e3da336b1c0c12bafafe72119584f1,},Annotations:
map[string]string{io.kubernetes.container.hash: f8e9c603,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28a14b66f3bcc00dfbf26a1682e2997de6ef190e3a18c9fda173f392b756de1,PodSandboxId:a8d041d56a0627bb3ae82bbc7163ff5d7bcf154e914de733f8316c3f0c072bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1724097194758113394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72497c73e653e57759258f1346da7e
,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a2de6fad22970c3c7696611ae659973628d2a8df76400867fe6f98b9aa784f0,PodSandboxId:a1c5318632ff44ff6eef245af45d450e0bc9457d85758348cd6df29aab261340,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724097194723686121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b8bc3d948501b85b904632c610ab61c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f442a11e-a272-475f-9a1e-db96750a37c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.786932616Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a028d499-3338-4911-9a17-f03f32936b5b name=/runtime.v1.RuntimeService/Version
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.787021599Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a028d499-3338-4911-9a17-f03f32936b5b name=/runtime.v1.RuntimeService/Version
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.789475494Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e58bc4b-73f6-4512-8304-137fbfd0f479 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.790033176Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097237790010910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e58bc4b-73f6-4512-8304-137fbfd0f479 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.790563461Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2af73b3c-e9d9-496f-9d4d-61ffa7ae008f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.790627532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2af73b3c-e9d9-496f-9d4d-61ffa7ae008f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.790906192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:223f4e427824395978fc2aba02a80f845d7fb62a7d0f2c6fcf65d038c6a5ce1f,PodSandboxId:b9da03b9b6fce26f2e934ca7b810e0d1a4e5fe9928a25f3e1dd4ab39147a09b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724097228268867319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-mqbdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba618be1-cf31-4ece-8851-839c008cb635,},Annotations:map[string]string{io.kubernetes.container.hash: 5034e344,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082d659946d40c6af21ac2000fba30e8c8079da4a1ade7ecec3e0848450d8a96,PodSandboxId:1f4fde3d03f0f46f5a838e30dcd0e2c00198f9adb7f41de9b3f5d5b199cd07b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724097221267027549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 367930bf-0d52-488e-9cfc-312f33a8674e,},Annotations:map[string]string{io.kubernetes.container.hash: 1ef0fdcb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aecd0af03ea08cc62353b2ef4022a8772c5bae77cbe84956f5a79600b1108216,PodSandboxId:2f26d1cd445e369e24dae0e7d7624a484f568fa611edb1669c252d9737309fa2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724097221096788535,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sczp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d43
e1f06-2cdb-4244-bc4b-79f998d6973b,},Annotations:map[string]string{io.kubernetes.container.hash: 16d7ecd6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30ebfa5d887327bcde084009f9f3e13e81520c69a34bd684cea21bbd2a8e20f,PodSandboxId:a8d041d56a0627bb3ae82bbc7163ff5d7bcf154e914de733f8316c3f0c072bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724097219268508547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 3d72497c73e653e57759258f1346da7e,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa24db4848d6f2d829fcd552758f4cd197726c3d19f619618f4d3f418df6b81,PodSandboxId:a6ca66518d6ebac01ef64e80d1284731b47c47c24aade6f703554bb92d2dab74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724097216242810459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 69e3da336b1c0c12bafafe72119584f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8e9c603,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05f129640f148e5cb943cff24b4da51a187a75d871d92a5c0d3676654bc573bb,PodSandboxId:5c351eeb402c2a3235b83691b249105542f9ee09dbddaaed2275b94070be94cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724097214441509743,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad39874cc92b56b14af8142bb687bbd,},
Annotations:map[string]string{io.kubernetes.container.hash: b9c283f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89bb7aa30eba56590f2b1501926c0d0f7e57a99b2c0afe45a9032a6e923da679,PodSandboxId:a6ca66518d6ebac01ef64e80d1284731b47c47c24aade6f703554bb92d2dab74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1724097194744005940,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69e3da336b1c0c12bafafe72119584f1,},Annotations:
map[string]string{io.kubernetes.container.hash: f8e9c603,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28a14b66f3bcc00dfbf26a1682e2997de6ef190e3a18c9fda173f392b756de1,PodSandboxId:a8d041d56a0627bb3ae82bbc7163ff5d7bcf154e914de733f8316c3f0c072bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1724097194758113394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72497c73e653e57759258f1346da7e
,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a2de6fad22970c3c7696611ae659973628d2a8df76400867fe6f98b9aa784f0,PodSandboxId:a1c5318632ff44ff6eef245af45d450e0bc9457d85758348cd6df29aab261340,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724097194723686121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b8bc3d948501b85b904632c610ab61c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2af73b3c-e9d9-496f-9d4d-61ffa7ae008f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.832958686Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05dc7b6a-f8f1-4efb-a05e-f8dcca6a60c2 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.833048388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05dc7b6a-f8f1-4efb-a05e-f8dcca6a60c2 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.834202255Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=711ee6ea-0acf-43a0-a07f-67163bae1d02 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.834700454Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097237834678641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=711ee6ea-0acf-43a0-a07f-67163bae1d02 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.835127758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9126887-4f45-4e15-bef3-c55c8b961ec7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.835197369Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9126887-4f45-4e15-bef3-c55c8b961ec7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.835426726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:223f4e427824395978fc2aba02a80f845d7fb62a7d0f2c6fcf65d038c6a5ce1f,PodSandboxId:b9da03b9b6fce26f2e934ca7b810e0d1a4e5fe9928a25f3e1dd4ab39147a09b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724097228268867319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-mqbdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba618be1-cf31-4ece-8851-839c008cb635,},Annotations:map[string]string{io.kubernetes.container.hash: 5034e344,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082d659946d40c6af21ac2000fba30e8c8079da4a1ade7ecec3e0848450d8a96,PodSandboxId:1f4fde3d03f0f46f5a838e30dcd0e2c00198f9adb7f41de9b3f5d5b199cd07b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724097221267027549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 367930bf-0d52-488e-9cfc-312f33a8674e,},Annotations:map[string]string{io.kubernetes.container.hash: 1ef0fdcb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aecd0af03ea08cc62353b2ef4022a8772c5bae77cbe84956f5a79600b1108216,PodSandboxId:2f26d1cd445e369e24dae0e7d7624a484f568fa611edb1669c252d9737309fa2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724097221096788535,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sczp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d43
e1f06-2cdb-4244-bc4b-79f998d6973b,},Annotations:map[string]string{io.kubernetes.container.hash: 16d7ecd6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30ebfa5d887327bcde084009f9f3e13e81520c69a34bd684cea21bbd2a8e20f,PodSandboxId:a8d041d56a0627bb3ae82bbc7163ff5d7bcf154e914de733f8316c3f0c072bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724097219268508547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 3d72497c73e653e57759258f1346da7e,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa24db4848d6f2d829fcd552758f4cd197726c3d19f619618f4d3f418df6b81,PodSandboxId:a6ca66518d6ebac01ef64e80d1284731b47c47c24aade6f703554bb92d2dab74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724097216242810459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 69e3da336b1c0c12bafafe72119584f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8e9c603,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05f129640f148e5cb943cff24b4da51a187a75d871d92a5c0d3676654bc573bb,PodSandboxId:5c351eeb402c2a3235b83691b249105542f9ee09dbddaaed2275b94070be94cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724097214441509743,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad39874cc92b56b14af8142bb687bbd,},
Annotations:map[string]string{io.kubernetes.container.hash: b9c283f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89bb7aa30eba56590f2b1501926c0d0f7e57a99b2c0afe45a9032a6e923da679,PodSandboxId:a6ca66518d6ebac01ef64e80d1284731b47c47c24aade6f703554bb92d2dab74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1724097194744005940,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69e3da336b1c0c12bafafe72119584f1,},Annotations:
map[string]string{io.kubernetes.container.hash: f8e9c603,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28a14b66f3bcc00dfbf26a1682e2997de6ef190e3a18c9fda173f392b756de1,PodSandboxId:a8d041d56a0627bb3ae82bbc7163ff5d7bcf154e914de733f8316c3f0c072bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1724097194758113394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72497c73e653e57759258f1346da7e
,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a2de6fad22970c3c7696611ae659973628d2a8df76400867fe6f98b9aa784f0,PodSandboxId:a1c5318632ff44ff6eef245af45d450e0bc9457d85758348cd6df29aab261340,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724097194723686121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b8bc3d948501b85b904632c610ab61c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9126887-4f45-4e15-bef3-c55c8b961ec7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.867502046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e6cb89d-f56c-45a5-8c55-da2f2df91fca name=/runtime.v1.RuntimeService/Version
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.867590344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e6cb89d-f56c-45a5-8c55-da2f2df91fca name=/runtime.v1.RuntimeService/Version
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.868659422Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94579bcf-5d3c-4052-aa99-3daacbfdbd0f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.869084849Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097237869057816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94579bcf-5d3c-4052-aa99-3daacbfdbd0f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.869580050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d62f666-5d91-4025-ab44-e98ab842e4ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.869643635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d62f666-5d91-4025-ab44-e98ab842e4ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:53:57 test-preload-247827 crio[678]: time="2024-08-19 19:53:57.869845964Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:223f4e427824395978fc2aba02a80f845d7fb62a7d0f2c6fcf65d038c6a5ce1f,PodSandboxId:b9da03b9b6fce26f2e934ca7b810e0d1a4e5fe9928a25f3e1dd4ab39147a09b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724097228268867319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-mqbdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba618be1-cf31-4ece-8851-839c008cb635,},Annotations:map[string]string{io.kubernetes.container.hash: 5034e344,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082d659946d40c6af21ac2000fba30e8c8079da4a1ade7ecec3e0848450d8a96,PodSandboxId:1f4fde3d03f0f46f5a838e30dcd0e2c00198f9adb7f41de9b3f5d5b199cd07b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724097221267027549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 367930bf-0d52-488e-9cfc-312f33a8674e,},Annotations:map[string]string{io.kubernetes.container.hash: 1ef0fdcb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aecd0af03ea08cc62353b2ef4022a8772c5bae77cbe84956f5a79600b1108216,PodSandboxId:2f26d1cd445e369e24dae0e7d7624a484f568fa611edb1669c252d9737309fa2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724097221096788535,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sczp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d43
e1f06-2cdb-4244-bc4b-79f998d6973b,},Annotations:map[string]string{io.kubernetes.container.hash: 16d7ecd6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30ebfa5d887327bcde084009f9f3e13e81520c69a34bd684cea21bbd2a8e20f,PodSandboxId:a8d041d56a0627bb3ae82bbc7163ff5d7bcf154e914de733f8316c3f0c072bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724097219268508547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 3d72497c73e653e57759258f1346da7e,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa24db4848d6f2d829fcd552758f4cd197726c3d19f619618f4d3f418df6b81,PodSandboxId:a6ca66518d6ebac01ef64e80d1284731b47c47c24aade6f703554bb92d2dab74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724097216242810459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 69e3da336b1c0c12bafafe72119584f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8e9c603,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05f129640f148e5cb943cff24b4da51a187a75d871d92a5c0d3676654bc573bb,PodSandboxId:5c351eeb402c2a3235b83691b249105542f9ee09dbddaaed2275b94070be94cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724097214441509743,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad39874cc92b56b14af8142bb687bbd,},
Annotations:map[string]string{io.kubernetes.container.hash: b9c283f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89bb7aa30eba56590f2b1501926c0d0f7e57a99b2c0afe45a9032a6e923da679,PodSandboxId:a6ca66518d6ebac01ef64e80d1284731b47c47c24aade6f703554bb92d2dab74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1724097194744005940,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69e3da336b1c0c12bafafe72119584f1,},Annotations:
map[string]string{io.kubernetes.container.hash: f8e9c603,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28a14b66f3bcc00dfbf26a1682e2997de6ef190e3a18c9fda173f392b756de1,PodSandboxId:a8d041d56a0627bb3ae82bbc7163ff5d7bcf154e914de733f8316c3f0c072bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1724097194758113394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72497c73e653e57759258f1346da7e
,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a2de6fad22970c3c7696611ae659973628d2a8df76400867fe6f98b9aa784f0,PodSandboxId:a1c5318632ff44ff6eef245af45d450e0bc9457d85758348cd6df29aab261340,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724097194723686121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-247827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b8bc3d948501b85b904632c610ab61c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d62f666-5d91-4025-ab44-e98ab842e4ea name=/runtime.v1.RuntimeService/ListContainers
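	[editor's note] The CRI-O entries above are the server side of the kubelet's periodic Version / ImageFsInfo / ListContainers calls over the CRI gRPC API on unix:///var/run/crio/crio.sock (the socket named in the node's kubeadm cri-socket annotation further down). A rough Go sketch of issuing the same ListContainers call directly is shown below, assuming the k8s.io/cri-api v1 client; it illustrates the API surface and is not code from this test run. On the VM itself, crictl ps -a gives the same view, which is essentially what the "==> container status <==" table below shows.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumption: CRI-O is listening on its default socket, as reported in the node annotations.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Empty filter: as the log above notes, "No filters were applied, returning full container list".
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-25s attempt=%d  %s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}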
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	223f4e4278243       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   b9da03b9b6fce       coredns-6d4b75cb6d-mqbdv
	082d659946d40       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Exited              storage-provisioner       2                   1f4fde3d03f0f       storage-provisioner
	aecd0af03ea08       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   2f26d1cd445e3       kube-proxy-sczp2
	d30ebfa5d8873       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   2                   a8d041d56a062       kube-controller-manager-test-preload-247827
	ffa24db4848d6       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            2                   a6ca66518d6eb       kube-apiserver-test-preload-247827
	05f129640f148       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   23 seconds ago      Running             etcd                      1                   5c351eeb402c2       etcd-test-preload-247827
	e28a14b66f3bc       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   43 seconds ago      Exited              kube-controller-manager   1                   a8d041d56a062       kube-controller-manager-test-preload-247827
	89bb7aa30eba5       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   43 seconds ago      Exited              kube-apiserver            1                   a6ca66518d6eb       kube-apiserver-test-preload-247827
	7a2de6fad2297       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   43 seconds ago      Running             kube-scheduler            1                   a1c5318632ff4       kube-scheduler-test-preload-247827
	
	
	==> coredns [223f4e427824395978fc2aba02a80f845d7fb62a7d0f2c6fcf65d038c6a5ce1f] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:32816 - 15308 "HINFO IN 697234518640450340.8015277014913115320. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02199775s
	
	
	==> describe nodes <==
	Name:               test-preload-247827
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-247827
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=test-preload-247827
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_52_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:52:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-247827
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:53:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:53:49 +0000   Mon, 19 Aug 2024 19:52:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:53:49 +0000   Mon, 19 Aug 2024 19:52:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:53:49 +0000   Mon, 19 Aug 2024 19:52:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:53:49 +0000   Mon, 19 Aug 2024 19:53:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    test-preload-247827
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 198cdc21d9624f05b3cb8f98d5b5ad35
	  System UUID:                198cdc21-d962-4f05-b3cb-8f98d5b5ad35
	  Boot ID:                    305d1872-9058-444c-81dc-0c6f63fde478
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-mqbdv                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     99s
	  kube-system                 etcd-test-preload-247827                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         112s
	  kube-system                 kube-apiserver-test-preload-247827             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-test-preload-247827    200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-sczp2                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-scheduler-test-preload-247827             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 97s                kube-proxy       
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  112s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node test-preload-247827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node test-preload-247827 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node test-preload-247827 status is now: NodeHasSufficientPID
	  Normal  NodeReady                102s               kubelet          Node test-preload-247827 status is now: NodeReady
	  Normal  RegisteredNode           100s               node-controller  Node test-preload-247827 event: Registered Node test-preload-247827 in Controller
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node test-preload-247827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node test-preload-247827 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x7 over 44s)  kubelet          Node test-preload-247827 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  44s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                 node-controller  Node test-preload-247827 event: Registered Node test-preload-247827 in Controller
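The node description above is the snapshot captured by minikube logs; it can be regenerated live once the cluster is reachable. A minimal sketch, assuming the kubeconfig context minikube creates is named after the profile:

	kubectl --context test-preload-247827 describe node test-preload-247827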
	
	
	==> dmesg <==
	[Aug19 19:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047776] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036091] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.740010] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.929806] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.426381] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.962357] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.060496] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053454] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.185337] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.117714] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.281154] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[Aug19 19:53] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[  +0.060101] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.275499] systemd-fstab-generator[1124]: Ignoring "noauto" option for root device
	[  +5.592151] kauditd_printk_skb: 95 callbacks suppressed
	[ +21.327681] kauditd_printk_skb: 10 callbacks suppressed
	[  +2.104952] systemd-fstab-generator[1930]: Ignoring "noauto" option for root device
	[  +5.202729] kauditd_printk_skb: 56 callbacks suppressed
	
	
	==> etcd [05f129640f148e5cb943cff24b4da51a187a75d871d92a5c0d3676654bc573bb] <==
	{"level":"info","ts":"2024-08-19T19:53:34.573Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"be6e2cf5fb13c","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-19T19:53:34.574Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-19T19:53:34.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c switched to configuration voters=(3350086559969596)"}
	{"level":"info","ts":"2024-08-19T19:53:34.575Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"855213fb0218a9ad","local-member-id":"be6e2cf5fb13c","added-peer-id":"be6e2cf5fb13c","added-peer-peer-urls":["https://192.168.39.61:2380"]}
	{"level":"info","ts":"2024-08-19T19:53:34.576Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"855213fb0218a9ad","local-member-id":"be6e2cf5fb13c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:53:34.576Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:53:34.577Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T19:53:34.577Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"be6e2cf5fb13c","initial-advertise-peer-urls":["https://192.168.39.61:2380"],"listen-peer-urls":["https://192.168.39.61:2380"],"advertise-client-urls":["https://192.168.39.61:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.61:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T19:53:34.577Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T19:53:34.577Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.61:2380"}
	{"level":"info","ts":"2024-08-19T19:53:34.577Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.61:2380"}
	{"level":"info","ts":"2024-08-19T19:53:35.962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T19:53:35.962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T19:53:35.962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c received MsgPreVoteResp from be6e2cf5fb13c at term 2"}
	{"level":"info","ts":"2024-08-19T19:53:35.962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T19:53:35.962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c received MsgVoteResp from be6e2cf5fb13c at term 3"}
	{"level":"info","ts":"2024-08-19T19:53:35.962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became leader at term 3"}
	{"level":"info","ts":"2024-08-19T19:53:35.962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: be6e2cf5fb13c elected leader be6e2cf5fb13c at term 3"}
	{"level":"info","ts":"2024-08-19T19:53:35.964Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"be6e2cf5fb13c","local-member-attributes":"{Name:test-preload-247827 ClientURLs:[https://192.168.39.61:2379]}","request-path":"/0/members/be6e2cf5fb13c/attributes","cluster-id":"855213fb0218a9ad","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T19:53:35.964Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:53:35.965Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:53:35.966Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T19:53:35.966Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T19:53:35.966Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T19:53:35.967Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.61:2379"}
	
	
	==> kernel <==
	 19:53:58 up 1 min,  0 users,  load average: 0.56, 0.17, 0.06
	Linux test-preload-247827 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [89bb7aa30eba56590f2b1501926c0d0f7e57a99b2c0afe45a9032a6e923da679] <==
	I0819 19:53:15.441591       1 server.go:558] external host was not specified, using 192.168.39.61
	I0819 19:53:15.442631       1 server.go:158] Version: v1.24.4
	I0819 19:53:15.442742       1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:53:15.776448       1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
	I0819 19:53:15.779820       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 19:53:15.779891       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 19:53:15.781006       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 19:53:15.781061       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	W0819 19:53:15.794692       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0819 19:53:16.739509       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0819 19:53:16.795571       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0819 19:53:17.740044       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0819 19:53:18.381043       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0819 19:53:19.084423       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0819 19:53:21.248290       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0819 19:53:21.903755       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0819 19:53:25.108917       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0819 19:53:26.045856       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0819 19:53:30.505586       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0819 19:53:31.336289       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	E0819 19:53:35.789555       1 run.go:74] "command failed" err="context deadline exceeded"
	
	
	==> kube-apiserver [ffa24db4848d6f2d829fcd552758f4cd197726c3d19f619618f4d3f418df6b81] <==
	I0819 19:53:39.115859       1 controller.go:85] Starting OpenAPI V3 controller
	I0819 19:53:39.115889       1 naming_controller.go:291] Starting NamingConditionController
	I0819 19:53:39.115907       1 establishing_controller.go:76] Starting EstablishingController
	I0819 19:53:39.115919       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0819 19:53:39.115944       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0819 19:53:39.115976       1 crd_finalizer.go:266] Starting CRDFinalizer
	E0819 19:53:39.216219       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0819 19:53:39.240003       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0819 19:53:39.275508       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0819 19:53:39.275995       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 19:53:39.287203       1 cache.go:39] Caches are synced for autoregister controller
	I0819 19:53:39.287511       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0819 19:53:39.287567       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0819 19:53:39.288785       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0819 19:53:39.779595       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0819 19:53:40.085134       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 19:53:40.311228       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 19:53:40.770243       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0819 19:53:40.779623       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0819 19:53:40.851754       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0819 19:53:40.873263       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 19:53:40.888498       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 19:53:41.371802       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0819 19:53:52.486132       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 19:53:52.585034       1 controller.go:611] quota admission added evaluator for: endpoints
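The earlier kube-apiserver container (89bb7aa...) gave up with "context deadline exceeded" because etcd on 127.0.0.1:2379 was still refusing connections; the replacement instance above synced its caches and began admitting writes once etcd came back. A quick way to confirm the restarted apiserver is healthy, assuming the same kubeconfig context:

	kubectl --context test-preload-247827 get --raw='/readyz?verbose'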
	
	
	==> kube-controller-manager [d30ebfa5d887327bcde084009f9f3e13e81520c69a34bd684cea21bbd2a8e20f] <==
	I0819 19:53:52.496617       1 shared_informer.go:262] Caches are synced for GC
	I0819 19:53:52.497834       1 shared_informer.go:262] Caches are synced for service account
	I0819 19:53:52.502027       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0819 19:53:52.502107       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0819 19:53:52.502141       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0819 19:53:52.502302       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0819 19:53:52.505555       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0819 19:53:52.552951       1 shared_informer.go:262] Caches are synced for persistent volume
	I0819 19:53:52.555371       1 shared_informer.go:262] Caches are synced for attach detach
	I0819 19:53:52.556385       1 shared_informer.go:262] Caches are synced for expand
	I0819 19:53:52.619954       1 shared_informer.go:262] Caches are synced for PV protection
	I0819 19:53:52.624283       1 shared_informer.go:262] Caches are synced for deployment
	I0819 19:53:52.645545       1 shared_informer.go:262] Caches are synced for taint
	I0819 19:53:52.645761       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0819 19:53:52.645779       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0819 19:53:52.646009       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-247827. Assuming now as a timestamp.
	I0819 19:53:52.646070       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0819 19:53:52.646399       1 event.go:294] "Event occurred" object="test-preload-247827" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-247827 event: Registered Node test-preload-247827 in Controller"
	I0819 19:53:52.681690       1 shared_informer.go:262] Caches are synced for disruption
	I0819 19:53:52.681772       1 disruption.go:371] Sending events to api server.
	I0819 19:53:52.685444       1 shared_informer.go:262] Caches are synced for resource quota
	I0819 19:53:52.728178       1 shared_informer.go:262] Caches are synced for resource quota
	I0819 19:53:53.134619       1 shared_informer.go:262] Caches are synced for garbage collector
	I0819 19:53:53.143027       1 shared_informer.go:262] Caches are synced for garbage collector
	I0819 19:53:53.143060       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-controller-manager [e28a14b66f3bcc00dfbf26a1682e2997de6ef190e3a18c9fda173f392b756de1] <==
		/usr/local/go/src/bytes/buffer.go:204 +0x98
	crypto/tls.(*Conn).readFromUntil(0xc000199500, {0x4d02200?, 0xc0001142a8}, 0x902?)
		/usr/local/go/src/crypto/tls/conn.go:807 +0xe5
	crypto/tls.(*Conn).readRecordOrCCS(0xc000199500, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:614 +0x116
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:582
	crypto/tls.(*Conn).Read(0xc000199500, {0xc000b06000, 0x1000, 0x91a200?})
		/usr/local/go/src/crypto/tls/conn.go:1285 +0x16f
	bufio.(*Reader).Read(0xc0007296e0, {0xc0001772a0, 0x9, 0x936b82?})
		/usr/local/go/src/bufio/bufio.go:236 +0x1b4
	io.ReadAtLeast({0x4cf9b00, 0xc0007296e0}, {0xc0001772a0, 0x9, 0x9}, 0x9)
		/usr/local/go/src/io/io.go:331 +0x9a
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:350
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc0001772a0?, 0x9?, 0xc0019fd140?}, {0x4cf9b00?, 0xc0007296e0?})
		vendor/golang.org/x/net/http2/frame.go:237 +0x6e
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000177260)
		vendor/golang.org/x/net/http2/frame.go:498 +0x95
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0009a5f98)
		vendor/golang.org/x/net/http2/transport.go:2101 +0x130
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc0006f8a80)
		vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		vendor/golang.org/x/net/http2/transport.go:725 +0xa65
	
	
	==> kube-proxy [aecd0af03ea08cc62353b2ef4022a8772c5bae77cbe84956f5a79600b1108216] <==
	I0819 19:53:41.295216       1 node.go:163] Successfully retrieved node IP: 192.168.39.61
	I0819 19:53:41.295384       1 server_others.go:138] "Detected node IP" address="192.168.39.61"
	I0819 19:53:41.295409       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0819 19:53:41.358290       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0819 19:53:41.358418       1 server_others.go:206] "Using iptables Proxier"
	I0819 19:53:41.358881       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0819 19:53:41.359646       1 server.go:661] "Version info" version="v1.24.4"
	I0819 19:53:41.359688       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:53:41.361533       1 config.go:317] "Starting service config controller"
	I0819 19:53:41.361810       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0819 19:53:41.361864       1 config.go:226] "Starting endpoint slice config controller"
	I0819 19:53:41.361882       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0819 19:53:41.362889       1 config.go:444] "Starting node config controller"
	I0819 19:53:41.362926       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0819 19:53:41.462612       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0819 19:53:41.462741       1 shared_informer.go:262] Caches are synced for service config
	I0819 19:53:41.463515       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [7a2de6fad22970c3c7696611ae659973628d2a8df76400867fe6f98b9aa784f0] <==
	W0819 19:53:39.205458       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 19:53:39.205539       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0819 19:53:39.205598       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 19:53:39.205624       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0819 19:53:39.212421       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 19:53:39.213414       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0819 19:53:39.214464       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 19:53:39.214509       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0819 19:53:39.214554       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 19:53:39.214562       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0819 19:53:39.214600       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 19:53:39.214607       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0819 19:53:39.214640       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 19:53:39.214664       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0819 19:53:39.214701       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 19:53:39.214723       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0819 19:53:39.214759       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 19:53:39.214780       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 19:53:39.214818       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 19:53:39.214839       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0819 19:53:39.214877       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 19:53:39.214897       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0819 19:53:39.214932       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 19:53:39.214981       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0819 19:53:39.270489       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
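The "forbidden" warnings above are expected while the restarted apiserver is still priming its RBAC caches; the final line shows the scheduler's client-ca informer eventually syncing. If they persisted, a hedged check of the scheduler's permissions would be:

	kubectl --context test-preload-247827 auth can-i list nodes --as=system:kube-scheduler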
	
	
	==> kubelet <==
	Aug 19 19:53:40 test-preload-247827 kubelet[1131]: I0819 19:53:40.264865    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba618be1-cf31-4ece-8851-839c008cb635-config-volume\") pod \"coredns-6d4b75cb6d-mqbdv\" (UID: \"ba618be1-cf31-4ece-8851-839c008cb635\") " pod="kube-system/coredns-6d4b75cb6d-mqbdv"
	Aug 19 19:53:40 test-preload-247827 kubelet[1131]: I0819 19:53:40.265083    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d43e1f06-2cdb-4244-bc4b-79f998d6973b-lib-modules\") pod \"kube-proxy-sczp2\" (UID: \"d43e1f06-2cdb-4244-bc4b-79f998d6973b\") " pod="kube-system/kube-proxy-sczp2"
	Aug 19 19:53:40 test-preload-247827 kubelet[1131]: I0819 19:53:40.265183    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/367930bf-0d52-488e-9cfc-312f33a8674e-tmp\") pod \"storage-provisioner\" (UID: \"367930bf-0d52-488e-9cfc-312f33a8674e\") " pod="kube-system/storage-provisioner"
	Aug 19 19:53:40 test-preload-247827 kubelet[1131]: I0819 19:53:40.265277    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rt8d\" (UniqueName: \"kubernetes.io/projected/367930bf-0d52-488e-9cfc-312f33a8674e-kube-api-access-2rt8d\") pod \"storage-provisioner\" (UID: \"367930bf-0d52-488e-9cfc-312f33a8674e\") " pod="kube-system/storage-provisioner"
	Aug 19 19:53:40 test-preload-247827 kubelet[1131]: I0819 19:53:40.265378    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d43e1f06-2cdb-4244-bc4b-79f998d6973b-kube-proxy\") pod \"kube-proxy-sczp2\" (UID: \"d43e1f06-2cdb-4244-bc4b-79f998d6973b\") " pod="kube-system/kube-proxy-sczp2"
	Aug 19 19:53:40 test-preload-247827 kubelet[1131]: I0819 19:53:40.265458    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg9xj\" (UniqueName: \"kubernetes.io/projected/d43e1f06-2cdb-4244-bc4b-79f998d6973b-kube-api-access-sg9xj\") pod \"kube-proxy-sczp2\" (UID: \"d43e1f06-2cdb-4244-bc4b-79f998d6973b\") " pod="kube-system/kube-proxy-sczp2"
	Aug 19 19:53:40 test-preload-247827 kubelet[1131]: I0819 19:53:40.265545    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db9pf\" (UniqueName: \"kubernetes.io/projected/ba618be1-cf31-4ece-8851-839c008cb635-kube-api-access-db9pf\") pod \"coredns-6d4b75cb6d-mqbdv\" (UID: \"ba618be1-cf31-4ece-8851-839c008cb635\") " pod="kube-system/coredns-6d4b75cb6d-mqbdv"
	Aug 19 19:53:40 test-preload-247827 kubelet[1131]: I0819 19:53:40.265633    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d43e1f06-2cdb-4244-bc4b-79f998d6973b-xtables-lock\") pod \"kube-proxy-sczp2\" (UID: \"d43e1f06-2cdb-4244-bc4b-79f998d6973b\") " pod="kube-system/kube-proxy-sczp2"
	Aug 19 19:53:40 test-preload-247827 kubelet[1131]: I0819 19:53:40.265685    1131 reconciler.go:159] "Reconciler: start to sync state"
	Aug 19 19:53:40 test-preload-247827 kubelet[1131]: E0819 19:53:40.369759    1131 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 19 19:53:40 test-preload-247827 kubelet[1131]: E0819 19:53:40.369855    1131 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ba618be1-cf31-4ece-8851-839c008cb635-config-volume podName:ba618be1-cf31-4ece-8851-839c008cb635 nodeName:}" failed. No retries permitted until 2024-08-19 19:53:40.869831797 +0000 UTC m=+26.934237262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ba618be1-cf31-4ece-8851-839c008cb635-config-volume") pod "coredns-6d4b75cb6d-mqbdv" (UID: "ba618be1-cf31-4ece-8851-839c008cb635") : object "kube-system"/"coredns" not registered
	Aug 19 19:53:40 test-preload-247827 kubelet[1131]: E0819 19:53:40.872588    1131 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 19 19:53:40 test-preload-247827 kubelet[1131]: E0819 19:53:40.872664    1131 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ba618be1-cf31-4ece-8851-839c008cb635-config-volume podName:ba618be1-cf31-4ece-8851-839c008cb635 nodeName:}" failed. No retries permitted until 2024-08-19 19:53:41.872644646 +0000 UTC m=+27.937050111 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ba618be1-cf31-4ece-8851-839c008cb635-config-volume") pod "coredns-6d4b75cb6d-mqbdv" (UID: "ba618be1-cf31-4ece-8851-839c008cb635") : object "kube-system"/"coredns" not registered
	Aug 19 19:53:41 test-preload-247827 kubelet[1131]: I0819 19:53:41.255392    1131 scope.go:110] "RemoveContainer" containerID="6445c6390335642d0a39869c01651f0b12d73e5ff661e160097736182fd9708c"
	Aug 19 19:53:41 test-preload-247827 kubelet[1131]: E0819 19:53:41.880347    1131 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 19 19:53:41 test-preload-247827 kubelet[1131]: E0819 19:53:41.880472    1131 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ba618be1-cf31-4ece-8851-839c008cb635-config-volume podName:ba618be1-cf31-4ece-8851-839c008cb635 nodeName:}" failed. No retries permitted until 2024-08-19 19:53:43.880455283 +0000 UTC m=+29.944860740 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ba618be1-cf31-4ece-8851-839c008cb635-config-volume") pod "coredns-6d4b75cb6d-mqbdv" (UID: "ba618be1-cf31-4ece-8851-839c008cb635") : object "kube-system"/"coredns" not registered
	Aug 19 19:53:42 test-preload-247827 kubelet[1131]: E0819 19:53:42.160786    1131 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-mqbdv" podUID=ba618be1-cf31-4ece-8851-839c008cb635
	Aug 19 19:53:42 test-preload-247827 kubelet[1131]: I0819 19:53:42.260558    1131 scope.go:110] "RemoveContainer" containerID="082d659946d40c6af21ac2000fba30e8c8079da4a1ade7ecec3e0848450d8a96"
	Aug 19 19:53:42 test-preload-247827 kubelet[1131]: E0819 19:53:42.260705    1131 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(367930bf-0d52-488e-9cfc-312f33a8674e)\"" pod="kube-system/storage-provisioner" podUID=367930bf-0d52-488e-9cfc-312f33a8674e
	Aug 19 19:53:42 test-preload-247827 kubelet[1131]: I0819 19:53:42.260754    1131 scope.go:110] "RemoveContainer" containerID="6445c6390335642d0a39869c01651f0b12d73e5ff661e160097736182fd9708c"
	Aug 19 19:53:43 test-preload-247827 kubelet[1131]: I0819 19:53:43.268627    1131 scope.go:110] "RemoveContainer" containerID="082d659946d40c6af21ac2000fba30e8c8079da4a1ade7ecec3e0848450d8a96"
	Aug 19 19:53:43 test-preload-247827 kubelet[1131]: E0819 19:53:43.268810    1131 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(367930bf-0d52-488e-9cfc-312f33a8674e)\"" pod="kube-system/storage-provisioner" podUID=367930bf-0d52-488e-9cfc-312f33a8674e
	Aug 19 19:53:43 test-preload-247827 kubelet[1131]: E0819 19:53:43.895683    1131 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 19 19:53:43 test-preload-247827 kubelet[1131]: E0819 19:53:43.895789    1131 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ba618be1-cf31-4ece-8851-839c008cb635-config-volume podName:ba618be1-cf31-4ece-8851-839c008cb635 nodeName:}" failed. No retries permitted until 2024-08-19 19:53:47.895771157 +0000 UTC m=+33.960176612 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ba618be1-cf31-4ece-8851-839c008cb635-config-volume") pod "coredns-6d4b75cb6d-mqbdv" (UID: "ba618be1-cf31-4ece-8851-839c008cb635") : object "kube-system"/"coredns" not registered
	Aug 19 19:53:58 test-preload-247827 kubelet[1131]: I0819 19:53:58.161775    1131 scope.go:110] "RemoveContainer" containerID="082d659946d40c6af21ac2000fba30e8c8079da4a1ade7ecec3e0848450d8a96"
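The kubelet errors above boil down to two things: the coredns config-volume could not be mounted because the kube-system/coredns ConfigMap had not yet been re-registered with the restarted apiserver, and storage-provisioner was in a 10s CrashLoopBackOff. Both typically clear on their own; a minimal follow-up, assuming the same kubeconfig context:

	kubectl --context test-preload-247827 -n kube-system get configmap coredns
	kubectl --context test-preload-247827 -n kube-system get pods -o wide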
	
	
	==> storage-provisioner [082d659946d40c6af21ac2000fba30e8c8079da4a1ade7ecec3e0848450d8a96] <==
	I0819 19:53:41.351625       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 19:53:41.354469       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
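storage-provisioner crashed because the in-cluster endpoint 10.96.0.1:443 refused connections while the apiserver was restarting; the kubelet entry at 19:53:58 shows it being restarted once the back-off expired. To read the crashed container's output, assuming the pod name used by minikube's addon:

	kubectl --context test-preload-247827 -n kube-system logs storage-provisioner --previous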
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-247827 -n test-preload-247827
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-247827 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-247827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-247827
--- FAIL: TestPreload (186.00s)
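For local triage the post-mortem above can be replayed by hand. The profile name and versions come from the captured logs (Kubernetes v1.24.4, cri-o 1.29.1, kvm2 driver); the start flags are an approximation of what TestPreload uses, not the test's exact invocation:

	out/minikube-linux-amd64 start -p test-preload-247827 --kubernetes-version=v1.24.4 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-247827 logs -n 25
	kubectl --context test-preload-247827 get po -A --field-selector=status.phase!=Running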

                                                
                                    
TestKubernetesUpgrade (1147.8s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-382787 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0819 19:59:22.029375  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-382787 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m25.849430669s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-382787] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-382787" primary control-plane node in "kubernetes-upgrade-382787" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
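The stdout above shows the v1.20.0 control-plane bootstrap being attempted twice ("Generating certificates and keys ... Booting up control plane ..." repeated) before minikube gave up with exit status 109. When reproducing locally, the kubelet journal inside the VM is usually the most informative next step; a sketch, assuming the profile still exists:

	out/minikube-linux-amd64 -p kubernetes-upgrade-382787 ssh -- sudo journalctl -u kubelet --no-pager | tail -n 100
	out/minikube-linux-amd64 -p kubernetes-upgrade-382787 logs --file=kubernetes-upgrade.log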
** stderr ** 
	I0819 19:59:14.103503  482115 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:59:14.103607  482115 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:59:14.103612  482115 out.go:358] Setting ErrFile to fd 2...
	I0819 19:59:14.103616  482115 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:59:14.103804  482115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:59:14.104386  482115 out.go:352] Setting JSON to false
	I0819 19:59:14.105547  482115 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13305,"bootTime":1724084249,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:59:14.105616  482115 start.go:139] virtualization: kvm guest
	I0819 19:59:14.107858  482115 out.go:177] * [kubernetes-upgrade-382787] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:59:14.109226  482115 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 19:59:14.109283  482115 notify.go:220] Checking for updates...
	I0819 19:59:14.111935  482115 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:59:14.113281  482115 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:59:14.114571  482115 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:59:14.115747  482115 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:59:14.117046  482115 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:59:14.118825  482115 config.go:182] Loaded profile config "NoKubernetes-803941": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0819 19:59:14.118949  482115 config.go:182] Loaded profile config "cert-expiration-228973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:59:14.119034  482115 config.go:182] Loaded profile config "running-upgrade-814149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0819 19:59:14.119134  482115 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 19:59:14.159189  482115 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 19:59:14.160982  482115 start.go:297] selected driver: kvm2
	I0819 19:59:14.161020  482115 start.go:901] validating driver "kvm2" against <nil>
	I0819 19:59:14.161037  482115 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:59:14.162215  482115 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:59:14.162321  482115 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:59:14.180103  482115 install.go:137] /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:59:14.180185  482115 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 19:59:14.180401  482115 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 19:59:14.180445  482115 cni.go:84] Creating CNI manager for ""
	I0819 19:59:14.180455  482115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:59:14.180461  482115 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 19:59:14.180522  482115 start.go:340] cluster config:
	{Name:kubernetes-upgrade-382787 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-382787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:59:14.180608  482115 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:59:14.182479  482115 out.go:177] * Starting "kubernetes-upgrade-382787" primary control-plane node in "kubernetes-upgrade-382787" cluster
	I0819 19:59:14.183763  482115 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:59:14.183819  482115 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:59:14.183839  482115 cache.go:56] Caching tarball of preloaded images
	I0819 19:59:14.183952  482115 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:59:14.183964  482115 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 19:59:14.184079  482115 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/config.json ...
	I0819 19:59:14.184107  482115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/config.json: {Name:mk3c72cbeff1c9156fb2ed6a36e7e2fdb7d8d037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:59:14.184289  482115 start.go:360] acquireMachinesLock for kubernetes-upgrade-382787: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:59:14.184326  482115 start.go:364] duration metric: took 19.591µs to acquireMachinesLock for "kubernetes-upgrade-382787"
	I0819 19:59:14.184352  482115 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-382787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-382787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:59:14.184443  482115 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 19:59:14.186223  482115 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 19:59:14.186397  482115 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:59:14.186443  482115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:59:14.201853  482115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34133
	I0819 19:59:14.202377  482115 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:59:14.202966  482115 main.go:141] libmachine: Using API Version  1
	I0819 19:59:14.202990  482115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:59:14.203403  482115 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:59:14.203617  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetMachineName
	I0819 19:59:14.203782  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 19:59:14.203926  482115 start.go:159] libmachine.API.Create for "kubernetes-upgrade-382787" (driver="kvm2")
	I0819 19:59:14.203959  482115 client.go:168] LocalClient.Create starting
	I0819 19:59:14.203988  482115 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem
	I0819 19:59:14.204023  482115 main.go:141] libmachine: Decoding PEM data...
	I0819 19:59:14.204039  482115 main.go:141] libmachine: Parsing certificate...
	I0819 19:59:14.204084  482115 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem
	I0819 19:59:14.204102  482115 main.go:141] libmachine: Decoding PEM data...
	I0819 19:59:14.204113  482115 main.go:141] libmachine: Parsing certificate...
	I0819 19:59:14.204129  482115 main.go:141] libmachine: Running pre-create checks...
	I0819 19:59:14.204144  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .PreCreateCheck
	I0819 19:59:14.204519  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetConfigRaw
	I0819 19:59:14.204954  482115 main.go:141] libmachine: Creating machine...
	I0819 19:59:14.204971  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .Create
	I0819 19:59:14.205216  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Creating KVM machine...
	I0819 19:59:14.206526  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found existing default KVM network
	I0819 19:59:14.208078  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:14.207899  482139 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3c:88:e7} reservation:<nil>}
	I0819 19:59:14.209617  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:14.209505  482139 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000284ad0}
	I0819 19:59:14.209648  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | created network xml: 
	I0819 19:59:14.209660  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | <network>
	I0819 19:59:14.209669  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG |   <name>mk-kubernetes-upgrade-382787</name>
	I0819 19:59:14.209712  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG |   <dns enable='no'/>
	I0819 19:59:14.209732  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG |   
	I0819 19:59:14.209740  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0819 19:59:14.209753  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG |     <dhcp>
	I0819 19:59:14.209766  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0819 19:59:14.209779  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG |     </dhcp>
	I0819 19:59:14.209788  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG |   </ip>
	I0819 19:59:14.209800  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG |   
	I0819 19:59:14.209811  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | </network>
	I0819 19:59:14.209823  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | 
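
The network definition logged above is a plain libvirt <network> document: DNS disabled, and a DHCP range carved out of the free 192.168.50.0/24 subnet the driver just picked. A minimal, self-contained Go sketch that assembles an equivalent document (standard library only; this is illustrative and not the kvm2 driver's own template code):

// netxml_sketch.go — illustrative only; the kvm2 driver builds this XML with
// its own templates, these struct types are assumptions for the example.
package main

import (
	"encoding/xml"
	"fmt"
)

type dhcpRange struct {
	Start string `xml:"start,attr"`
	End   string `xml:"end,attr"`
}

type ipConfig struct {
	Address string `xml:"address,attr"`
	Netmask string `xml:"netmask,attr"`
	DHCP    struct {
		Range dhcpRange `xml:"range"`
	} `xml:"dhcp"`
}

type network struct {
	XMLName xml.Name `xml:"network"`
	Name    string   `xml:"name"`
	DNS     struct {
		Enable string `xml:"enable,attr"`
	} `xml:"dns"`
	IP ipConfig `xml:"ip"`
}

func main() {
	// values taken from the log lines above
	n := network{Name: "mk-kubernetes-upgrade-382787"}
	n.DNS.Enable = "no"
	n.IP = ipConfig{Address: "192.168.50.1", Netmask: "255.255.255.0"}
	n.IP.DHCP.Range = dhcpRange{Start: "192.168.50.2", End: "192.168.50.253"}

	out, err := xml.MarshalIndent(n, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Running it prints a <network> document with the same name, DNS setting and DHCP range as the one the driver logs before creating the private KVM network.
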
	I0819 19:59:14.215072  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | trying to create private KVM network mk-kubernetes-upgrade-382787 192.168.50.0/24...
	I0819 19:59:14.295432  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | private KVM network mk-kubernetes-upgrade-382787 192.168.50.0/24 created
	I0819 19:59:14.295467  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Setting up store path in /home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787 ...
	I0819 19:59:14.295490  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Building disk image from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 19:59:14.295519  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:14.295475  482139 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:59:14.295720  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Downloading /home/jenkins/minikube-integration/19423-430949/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 19:59:14.588256  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:14.588094  482139 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/id_rsa...
	I0819 19:59:14.679803  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:14.679634  482139 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/kubernetes-upgrade-382787.rawdisk...
	I0819 19:59:14.679833  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | Writing magic tar header
	I0819 19:59:14.679869  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | Writing SSH key tar header
	I0819 19:59:14.679902  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:14.679762  482139 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787 ...
	I0819 19:59:14.679917  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787 (perms=drwx------)
	I0819 19:59:14.679956  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines (perms=drwxr-xr-x)
	I0819 19:59:14.679967  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube (perms=drwxr-xr-x)
	I0819 19:59:14.679976  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787
	I0819 19:59:14.679987  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines
	I0819 19:59:14.679996  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:59:14.680007  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949 (perms=drwxrwxr-x)
	I0819 19:59:14.680021  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949
	I0819 19:59:14.680034  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 19:59:14.680050  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 19:59:14.680061  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Creating domain...
	I0819 19:59:14.680071  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 19:59:14.680079  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | Checking permissions on dir: /home/jenkins
	I0819 19:59:14.680086  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | Checking permissions on dir: /home
	I0819 19:59:14.680094  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | Skipping /home - not owner
	I0819 19:59:14.681287  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) define libvirt domain using xml: 
	I0819 19:59:14.681316  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) <domain type='kvm'>
	I0819 19:59:14.681327  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)   <name>kubernetes-upgrade-382787</name>
	I0819 19:59:14.681341  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)   <memory unit='MiB'>2200</memory>
	I0819 19:59:14.681350  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)   <vcpu>2</vcpu>
	I0819 19:59:14.681360  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)   <features>
	I0819 19:59:14.681369  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     <acpi/>
	I0819 19:59:14.681379  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     <apic/>
	I0819 19:59:14.681398  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     <pae/>
	I0819 19:59:14.681409  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     
	I0819 19:59:14.681437  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)   </features>
	I0819 19:59:14.681460  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)   <cpu mode='host-passthrough'>
	I0819 19:59:14.681470  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)   
	I0819 19:59:14.681481  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)   </cpu>
	I0819 19:59:14.681489  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)   <os>
	I0819 19:59:14.681501  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     <type>hvm</type>
	I0819 19:59:14.681512  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     <boot dev='cdrom'/>
	I0819 19:59:14.681520  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     <boot dev='hd'/>
	I0819 19:59:14.681527  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     <bootmenu enable='no'/>
	I0819 19:59:14.681536  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)   </os>
	I0819 19:59:14.681546  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)   <devices>
	I0819 19:59:14.681561  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     <disk type='file' device='cdrom'>
	I0819 19:59:14.681579  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/boot2docker.iso'/>
	I0819 19:59:14.681591  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)       <target dev='hdc' bus='scsi'/>
	I0819 19:59:14.681604  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)       <readonly/>
	I0819 19:59:14.681620  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     </disk>
	I0819 19:59:14.681662  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     <disk type='file' device='disk'>
	I0819 19:59:14.681690  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 19:59:14.681709  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/kubernetes-upgrade-382787.rawdisk'/>
	I0819 19:59:14.681724  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)       <target dev='hda' bus='virtio'/>
	I0819 19:59:14.681736  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     </disk>
	I0819 19:59:14.681744  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     <interface type='network'>
	I0819 19:59:14.681757  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)       <source network='mk-kubernetes-upgrade-382787'/>
	I0819 19:59:14.681767  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)       <model type='virtio'/>
	I0819 19:59:14.681781  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     </interface>
	I0819 19:59:14.681789  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     <interface type='network'>
	I0819 19:59:14.681795  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)       <source network='default'/>
	I0819 19:59:14.681806  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)       <model type='virtio'/>
	I0819 19:59:14.681830  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     </interface>
	I0819 19:59:14.681853  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     <serial type='pty'>
	I0819 19:59:14.681863  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)       <target port='0'/>
	I0819 19:59:14.681874  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     </serial>
	I0819 19:59:14.681886  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     <console type='pty'>
	I0819 19:59:14.681898  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)       <target type='serial' port='0'/>
	I0819 19:59:14.681908  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     </console>
	I0819 19:59:14.681919  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     <rng model='virtio'>
	I0819 19:59:14.681936  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)       <backend model='random'>/dev/random</backend>
	I0819 19:59:14.681953  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     </rng>
	I0819 19:59:14.681966  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     
	I0819 19:59:14.681976  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)     
	I0819 19:59:14.681985  482115 main.go:141] libmachine: (kubernetes-upgrade-382787)   </devices>
	I0819 19:59:14.681995  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) </domain>
	I0819 19:59:14.682017  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) 
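
The <domain type='kvm'> document above is handed to libvirt as a single XML string and the domain is then started. As a rough illustration only (the kvm2 driver goes through the libvirt API rather than the CLI; the file name domain.xml is an assumption for the example), the same two steps via virsh look like:

// definedomain_sketch.go — hypothetical illustration: define and start a domain
// from an XML file using the virsh CLI against the qemu:///system URI from the
// profile config, instead of calling libvirtd through the API as minikube does.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v failed: %v\n%s", name, args, err, out))
	}
	fmt.Printf("%s", out)
}

func main() {
	// domain.xml would hold the <domain type='kvm'> document printed above
	run("virsh", "--connect", "qemu:///system", "define", "domain.xml")
	run("virsh", "--connect", "qemu:///system", "start", "kubernetes-upgrade-382787")
}
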
	I0819 19:59:14.750404  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:d0:e1:42 in network default
	I0819 19:59:14.751188  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Ensuring networks are active...
	I0819 19:59:14.751219  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:14.751901  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Ensuring network default is active
	I0819 19:59:14.752272  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Ensuring network mk-kubernetes-upgrade-382787 is active
	I0819 19:59:14.752800  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Getting domain xml...
	I0819 19:59:14.753537  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Creating domain...
	I0819 19:59:16.261444  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Waiting to get IP...
	I0819 19:59:16.262449  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:16.262955  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | unable to find current IP address of domain kubernetes-upgrade-382787 in network mk-kubernetes-upgrade-382787
	I0819 19:59:16.263031  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:16.262939  482139 retry.go:31] will retry after 251.38786ms: waiting for machine to come up
	I0819 19:59:16.517094  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:16.517737  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | unable to find current IP address of domain kubernetes-upgrade-382787 in network mk-kubernetes-upgrade-382787
	I0819 19:59:16.517770  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:16.517679  482139 retry.go:31] will retry after 365.459682ms: waiting for machine to come up
	I0819 19:59:16.885264  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:16.885759  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | unable to find current IP address of domain kubernetes-upgrade-382787 in network mk-kubernetes-upgrade-382787
	I0819 19:59:16.885818  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:16.885700  482139 retry.go:31] will retry after 451.692441ms: waiting for machine to come up
	I0819 19:59:17.339505  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:17.340013  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | unable to find current IP address of domain kubernetes-upgrade-382787 in network mk-kubernetes-upgrade-382787
	I0819 19:59:17.340044  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:17.339973  482139 retry.go:31] will retry after 543.810521ms: waiting for machine to come up
	I0819 19:59:17.885791  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:17.886213  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | unable to find current IP address of domain kubernetes-upgrade-382787 in network mk-kubernetes-upgrade-382787
	I0819 19:59:17.886239  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:17.886168  482139 retry.go:31] will retry after 616.63957ms: waiting for machine to come up
	I0819 19:59:18.503912  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:18.504282  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | unable to find current IP address of domain kubernetes-upgrade-382787 in network mk-kubernetes-upgrade-382787
	I0819 19:59:18.504313  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:18.504232  482139 retry.go:31] will retry after 767.006267ms: waiting for machine to come up
	I0819 19:59:19.273003  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:19.273554  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | unable to find current IP address of domain kubernetes-upgrade-382787 in network mk-kubernetes-upgrade-382787
	I0819 19:59:19.273587  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:19.273485  482139 retry.go:31] will retry after 1.065891872s: waiting for machine to come up
	I0819 19:59:20.340931  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:20.341424  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | unable to find current IP address of domain kubernetes-upgrade-382787 in network mk-kubernetes-upgrade-382787
	I0819 19:59:20.341455  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:20.341371  482139 retry.go:31] will retry after 1.082656355s: waiting for machine to come up
	I0819 19:59:21.425787  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:21.426341  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | unable to find current IP address of domain kubernetes-upgrade-382787 in network mk-kubernetes-upgrade-382787
	I0819 19:59:21.426371  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:21.426285  482139 retry.go:31] will retry after 1.762679883s: waiting for machine to come up
	I0819 19:59:23.190758  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:23.191214  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | unable to find current IP address of domain kubernetes-upgrade-382787 in network mk-kubernetes-upgrade-382787
	I0819 19:59:23.191243  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:23.191159  482139 retry.go:31] will retry after 2.071874465s: waiting for machine to come up
	I0819 19:59:25.265068  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:25.274543  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | unable to find current IP address of domain kubernetes-upgrade-382787 in network mk-kubernetes-upgrade-382787
	I0819 19:59:25.274571  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:25.274468  482139 retry.go:31] will retry after 2.374838362s: waiting for machine to come up
	I0819 19:59:27.651969  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:27.652390  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | unable to find current IP address of domain kubernetes-upgrade-382787 in network mk-kubernetes-upgrade-382787
	I0819 19:59:27.652415  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:27.652349  482139 retry.go:31] will retry after 2.554280992s: waiting for machine to come up
	I0819 19:59:30.208763  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:30.209210  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | unable to find current IP address of domain kubernetes-upgrade-382787 in network mk-kubernetes-upgrade-382787
	I0819 19:59:30.209242  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | I0819 19:59:30.209163  482139 retry.go:31] will retry after 4.440333051s: waiting for machine to come up
	I0819 19:59:34.650693  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:34.651146  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Found IP for machine: 192.168.50.10
	I0819 19:59:34.651169  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has current primary IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
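
The repeated "will retry after …: waiting for machine to come up" lines above are a polling loop with a growing, jittered delay while the new domain acquires its DHCP lease; after roughly twenty seconds the lease for 52:54:00:a8:af:a1 shows up with IP 192.168.50.10. A minimal sketch of that pattern (getIP and the delay constants are illustrative stand-ins, not the driver's actual retry.go values):

// retrysketch.go — a minimal sketch of the back-off pattern seen above:
// poll for the machine's IP, sleeping a little longer (with jitter) each time.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// getIP is a hypothetical stand-in for the driver's DHCP-lease lookup.
func getIP(attempt int) (string, error) {
	if attempt < 12 { // pretend the lease appears on the 13th poll
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.50.10", nil
}

func main() {
	wait := 250 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := getIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// grow the delay roughly geometrically, add jitter, cap it
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2
		if wait > 5*time.Second {
			wait = 5 * time.Second
		}
	}
}
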
	I0819 19:59:34.651191  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Reserving static IP address...
	I0819 19:59:34.651601  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-382787", mac: "52:54:00:a8:af:a1", ip: "192.168.50.10"} in network mk-kubernetes-upgrade-382787
	I0819 19:59:34.737216  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Reserved static IP address: 192.168.50.10
	I0819 19:59:34.737245  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | Getting to WaitForSSH function...
	I0819 19:59:34.737255  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Waiting for SSH to be available...
	I0819 19:59:34.739538  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:34.739905  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:34.739938  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:34.740104  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | Using SSH client type: external
	I0819 19:59:34.740135  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/id_rsa (-rw-------)
	I0819 19:59:34.740175  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:59:34.740198  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | About to run SSH command:
	I0819 19:59:34.740245  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | exit 0
	I0819 19:59:34.865155  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | SSH cmd err, output: <nil>: 
	I0819 19:59:34.865406  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) KVM machine creation complete!
	I0819 19:59:34.865740  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetConfigRaw
	I0819 19:59:34.866369  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 19:59:34.866567  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 19:59:34.866734  482115 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 19:59:34.866750  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetState
	I0819 19:59:34.868303  482115 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 19:59:34.868320  482115 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 19:59:34.868328  482115 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 19:59:34.868337  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 19:59:34.870669  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:34.871100  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:34.871131  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:34.871315  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 19:59:34.871509  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:34.871710  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:34.871882  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 19:59:34.872069  482115 main.go:141] libmachine: Using SSH client type: native
	I0819 19:59:34.872274  482115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0819 19:59:34.872284  482115 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 19:59:34.976361  482115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:59:34.976391  482115 main.go:141] libmachine: Detecting the provisioner...
	I0819 19:59:34.976399  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 19:59:34.979567  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:34.979889  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:34.979924  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:34.980058  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 19:59:34.980285  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:34.980464  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:34.980638  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 19:59:34.980809  482115 main.go:141] libmachine: Using SSH client type: native
	I0819 19:59:34.981019  482115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0819 19:59:34.981032  482115 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 19:59:35.085796  482115 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 19:59:35.085859  482115 main.go:141] libmachine: found compatible host: buildroot
	I0819 19:59:35.085877  482115 main.go:141] libmachine: Provisioning with buildroot...
	I0819 19:59:35.085888  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetMachineName
	I0819 19:59:35.086150  482115 buildroot.go:166] provisioning hostname "kubernetes-upgrade-382787"
	I0819 19:59:35.086178  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetMachineName
	I0819 19:59:35.086471  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 19:59:35.089251  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.089686  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:35.089710  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.089896  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 19:59:35.090126  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:35.090291  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:35.090408  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 19:59:35.090612  482115 main.go:141] libmachine: Using SSH client type: native
	I0819 19:59:35.090790  482115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0819 19:59:35.090802  482115 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-382787 && echo "kubernetes-upgrade-382787" | sudo tee /etc/hostname
	I0819 19:59:35.213103  482115 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-382787
	
	I0819 19:59:35.213365  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 19:59:35.216221  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.216773  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:35.216812  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.217034  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 19:59:35.217267  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:35.217420  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:35.217615  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 19:59:35.217804  482115 main.go:141] libmachine: Using SSH client type: native
	I0819 19:59:35.217977  482115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0819 19:59:35.217993  482115 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-382787' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-382787/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-382787' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:59:35.330232  482115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
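
Both provisioning commands above (setting the hostname and patching /etc/hosts) are ordinary shell commands run over SSH with the machine's generated key as user docker on port 22. A self-contained sketch of that pattern using golang.org/x/crypto/ssh (not minikube's ssh_runner; the key path, address and user are taken from the log above, and the command is just an example):

// sshsketch.go — illustrative only: run one remote command with key auth,
// mirroring the "Using SSH private key" / "new ssh client" lines in the log.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.50.10:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
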
	I0819 19:59:35.330277  482115 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:59:35.330304  482115 buildroot.go:174] setting up certificates
	I0819 19:59:35.330317  482115 provision.go:84] configureAuth start
	I0819 19:59:35.330336  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetMachineName
	I0819 19:59:35.330662  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetIP
	I0819 19:59:35.333179  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.333587  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:35.333618  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.333804  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 19:59:35.335879  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.336185  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:35.336216  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.336327  482115 provision.go:143] copyHostCerts
	I0819 19:59:35.336381  482115 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:59:35.336397  482115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:59:35.336450  482115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:59:35.336565  482115 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:59:35.336578  482115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:59:35.336608  482115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:59:35.336682  482115 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:59:35.336690  482115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:59:35.336709  482115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:59:35.336762  482115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-382787 san=[127.0.0.1 192.168.50.10 kubernetes-upgrade-382787 localhost minikube]
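
The server certificate generated above carries the SANs listed in the log (127.0.0.1, 192.168.50.10, the machine name, localhost, minikube). A simplified Go sketch of producing such a certificate (self-signed here for brevity, whereas minikube signs it with the CA key pair named in the log; the key size and validity period are assumptions):

// certsketch.go — simplified sketch of generating a server certificate with
// the SANs shown above; not minikube's provision code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-382787"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-382787", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.10")},
	}
	// self-signed: template doubles as parent; minikube would pass its CA here
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
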
	I0819 19:59:35.461987  482115 provision.go:177] copyRemoteCerts
	I0819 19:59:35.462048  482115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:59:35.462073  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 19:59:35.464610  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.464915  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:35.464945  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.465127  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 19:59:35.465349  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:35.465491  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 19:59:35.465658  482115 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/id_rsa Username:docker}
	I0819 19:59:35.547657  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:59:35.572243  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0819 19:59:35.600008  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 19:59:35.623764  482115 provision.go:87] duration metric: took 293.426778ms to configureAuth
	I0819 19:59:35.623802  482115 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:59:35.624013  482115 config.go:182] Loaded profile config "kubernetes-upgrade-382787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:59:35.624116  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 19:59:35.627110  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.627506  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:35.627541  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.627775  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 19:59:35.628006  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:35.628178  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:35.628343  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 19:59:35.628515  482115 main.go:141] libmachine: Using SSH client type: native
	I0819 19:59:35.628717  482115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0819 19:59:35.628743  482115 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:59:35.893369  482115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:59:35.893408  482115 main.go:141] libmachine: Checking connection to Docker...
	I0819 19:59:35.893425  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetURL
	I0819 19:59:35.894758  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | Using libvirt version 6000000
	I0819 19:59:35.896877  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.897209  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:35.897242  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.897405  482115 main.go:141] libmachine: Docker is up and running!
	I0819 19:59:35.897420  482115 main.go:141] libmachine: Reticulating splines...
	I0819 19:59:35.897427  482115 client.go:171] duration metric: took 21.693458718s to LocalClient.Create
	I0819 19:59:35.897452  482115 start.go:167] duration metric: took 21.693528507s to libmachine.API.Create "kubernetes-upgrade-382787"
	I0819 19:59:35.897460  482115 start.go:293] postStartSetup for "kubernetes-upgrade-382787" (driver="kvm2")
	I0819 19:59:35.897470  482115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:59:35.897487  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 19:59:35.897762  482115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:59:35.897795  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 19:59:35.899954  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.900285  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:35.900325  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:35.900501  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 19:59:35.900727  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:35.900897  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 19:59:35.901037  482115 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/id_rsa Username:docker}
	I0819 19:59:35.983390  482115 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:59:35.987794  482115 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:59:35.987832  482115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:59:35.987920  482115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:59:35.988017  482115 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:59:35.988116  482115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:59:35.998100  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:59:36.022796  482115 start.go:296] duration metric: took 125.322114ms for postStartSetup
	I0819 19:59:36.022860  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetConfigRaw
	I0819 19:59:36.023534  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetIP
	I0819 19:59:36.026125  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:36.026572  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:36.026608  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:36.026816  482115 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/config.json ...
	I0819 19:59:36.027053  482115 start.go:128] duration metric: took 21.842597896s to createHost
	I0819 19:59:36.027079  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 19:59:36.029362  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:36.029721  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:36.029754  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:36.029872  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 19:59:36.030086  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:36.030277  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:36.030438  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 19:59:36.030620  482115 main.go:141] libmachine: Using SSH client type: native
	I0819 19:59:36.030839  482115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0819 19:59:36.030855  482115 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:59:36.133733  482115 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724097576.105214322
	
	I0819 19:59:36.133770  482115 fix.go:216] guest clock: 1724097576.105214322
	I0819 19:59:36.133784  482115 fix.go:229] Guest: 2024-08-19 19:59:36.105214322 +0000 UTC Remote: 2024-08-19 19:59:36.02706491 +0000 UTC m=+21.962866926 (delta=78.149412ms)
	I0819 19:59:36.133857  482115 fix.go:200] guest clock delta is within tolerance: 78.149412ms
	I0819 19:59:36.133868  482115 start.go:83] releasing machines lock for "kubernetes-upgrade-382787", held for 21.949529577s
	I0819 19:59:36.133909  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 19:59:36.134229  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetIP
	I0819 19:59:36.136999  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:36.137410  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:36.137443  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:36.137582  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 19:59:36.138045  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 19:59:36.138218  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 19:59:36.138300  482115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:59:36.138357  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 19:59:36.138407  482115 ssh_runner.go:195] Run: cat /version.json
	I0819 19:59:36.138430  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 19:59:36.141174  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:36.141309  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:36.141548  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:36.141600  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:36.141635  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:36.141651  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:36.141775  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 19:59:36.141887  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 19:59:36.142005  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:36.142094  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 19:59:36.142176  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 19:59:36.142219  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 19:59:36.142299  482115 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/id_rsa Username:docker}
	I0819 19:59:36.142385  482115 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/id_rsa Username:docker}
	I0819 19:59:36.249232  482115 ssh_runner.go:195] Run: systemctl --version
	I0819 19:59:36.258554  482115 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:59:36.429294  482115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:59:36.435214  482115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:59:36.435283  482115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:59:36.451951  482115 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
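	The two steps above first look for a loopback CNI config and then rename any bridge/podman configs so that only the CNI minikube manages stays active. The log prints the find command with its shell quoting stripped; a hand-runnable, quoted form of the same pattern is roughly:
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;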
	I0819 19:59:36.451976  482115 start.go:495] detecting cgroup driver to use...
	I0819 19:59:36.452035  482115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:59:36.470425  482115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:59:36.485256  482115 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:59:36.485329  482115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:59:36.499898  482115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:59:36.514250  482115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:59:36.635643  482115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:59:36.799941  482115 docker.go:233] disabling docker service ...
	I0819 19:59:36.800008  482115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:59:36.817544  482115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:59:36.834440  482115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:59:36.964606  482115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:59:37.098479  482115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:59:37.114553  482115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:59:37.133305  482115 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 19:59:37.133373  482115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:59:37.144653  482115 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:59:37.144727  482115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:59:37.156416  482115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:59:37.167351  482115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:59:37.178236  482115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
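	The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager and put conmon into the pod cgroup, all in the 02-crio.conf drop-in. Verifying the resulting keys before CRI-O is restarted could look like this sketch:
	    grep -E '^(pause_image|cgroup_manager|conmon_cgroup) ' /etc/crio/crio.conf.d/02-crio.conf
	    # expected after the edits above:
	    #   pause_image = "registry.k8s.io/pause:3.2"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"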
	I0819 19:59:37.190303  482115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:59:37.199985  482115 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:59:37.200070  482115 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:59:37.214611  482115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
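	The sysctl probe above fails with status 255 because the freshly booted guest has no br_netfilter module loaded; minikube then loads the module and enables IPv4 forwarding before restarting CRI-O. Checking the end state by hand (a sketch):
	    sudo modprobe br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables   # should now resolve (commonly defaults to 1)
	    cat /proc/sys/net/ipv4/ip_forward           # should print 1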
	I0819 19:59:37.225901  482115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:59:37.362441  482115 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:59:37.505362  482115 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:59:37.505456  482115 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:59:37.510253  482115 start.go:563] Will wait 60s for crictl version
	I0819 19:59:37.510329  482115 ssh_runner.go:195] Run: which crictl
	I0819 19:59:37.514356  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:59:37.553068  482115 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:59:37.553191  482115 ssh_runner.go:195] Run: crio --version
	I0819 19:59:37.582495  482115 ssh_runner.go:195] Run: crio --version
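	At this point minikube confirms the runtime twice, once through the CRI endpoint and once through the crio binary itself. The same two checks by hand (a sketch):
	    sudo crictl version    # RuntimeName: cri-o, RuntimeVersion: 1.29.1, RuntimeApiVersion: v1
	    crio --version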
	I0819 19:59:37.617525  482115 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 19:59:37.618916  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetIP
	I0819 19:59:37.622370  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:37.622908  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 20:59:29 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 19:59:37.622943  482115 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 19:59:37.623253  482115 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:59:37.627768  482115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:59:37.640732  482115 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-382787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-382787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:59:37.640861  482115 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:59:37.640907  482115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:59:37.678463  482115 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:59:37.678553  482115 ssh_runner.go:195] Run: which lz4
	I0819 19:59:37.682894  482115 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:59:37.687311  482115 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:59:37.687366  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 19:59:39.247075  482115 crio.go:462] duration metric: took 1.564227701s to copy over tarball
	I0819 19:59:39.247175  482115 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:59:41.904508  482115 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.657274836s)
	I0819 19:59:41.904543  482115 crio.go:469] duration metric: took 2.657424044s to extract the tarball
	I0819 19:59:41.904554  482115 ssh_runner.go:146] rm: /preloaded.tar.lz4
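	The preload path above is: stat /preloaded.tar.lz4 on the guest, scp the ~473 MB cri-o image tarball over when it is missing, untar it into /var with xattrs preserved, then delete the archive. The extraction step in isolation is the command logged above (it assumes the lz4 binary is present in the guest image):
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo crictl images --output json   # shows what the tarball provided; in this run minikube still falls back to the image cache, as the next lines show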
	I0819 19:59:41.949439  482115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:59:41.995676  482115 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:59:41.995711  482115 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:59:41.995806  482115 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:59:41.995871  482115 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:59:41.995822  482115 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:59:41.995926  482115 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 19:59:41.995894  482115 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:59:41.995846  482115 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:59:41.995881  482115 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 19:59:41.995817  482115 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:59:41.997386  482115 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:59:41.997415  482115 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:59:41.997388  482115 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:59:41.997387  482115 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 19:59:41.997391  482115 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:59:41.997394  482115 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:59:41.997455  482115 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:59:41.997563  482115 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 19:59:42.173507  482115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 19:59:42.174120  482115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:59:42.175981  482115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:59:42.178669  482115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 19:59:42.181195  482115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:59:42.185600  482115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 19:59:42.196124  482115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:59:42.281060  482115 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 19:59:42.281116  482115 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 19:59:42.281180  482115 ssh_runner.go:195] Run: which crictl
	I0819 19:59:42.325476  482115 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 19:59:42.325527  482115 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:59:42.325540  482115 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 19:59:42.325578  482115 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:59:42.325585  482115 ssh_runner.go:195] Run: which crictl
	I0819 19:59:42.325622  482115 ssh_runner.go:195] Run: which crictl
	I0819 19:59:42.325650  482115 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 19:59:42.325684  482115 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:59:42.325725  482115 ssh_runner.go:195] Run: which crictl
	I0819 19:59:42.356238  482115 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 19:59:42.356296  482115 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:59:42.356353  482115 ssh_runner.go:195] Run: which crictl
	I0819 19:59:42.356364  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:59:42.356258  482115 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 19:59:42.356298  482115 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 19:59:42.356393  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:59:42.356411  482115 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 19:59:42.356415  482115 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:59:42.356440  482115 ssh_runner.go:195] Run: which crictl
	I0819 19:59:42.356463  482115 ssh_runner.go:195] Run: which crictl
	I0819 19:59:42.356442  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:59:42.356508  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:59:42.451719  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:59:42.451814  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:59:42.451826  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:59:42.451826  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:59:42.451940  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:59:42.451946  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:59:42.452103  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:59:42.617058  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:59:42.617110  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:59:42.617165  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:59:42.617186  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:59:42.617246  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:59:42.617262  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:59:42.617309  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:59:42.649740  482115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:59:42.785929  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:59:42.786046  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:59:42.786059  482115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:59:42.786111  482115 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 19:59:42.786232  482115 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 19:59:42.786306  482115 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 19:59:42.786322  482115 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 19:59:42.896892  482115 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 19:59:42.896951  482115 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 19:59:42.896969  482115 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 19:59:42.897028  482115 cache_images.go:92] duration metric: took 901.301383ms to LoadCachedImages
	W0819 19:59:42.897149  482115 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
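	The block above is minikube's image-cache fallback: for each required v1.20.0 image it asks podman for a local ID, finds the expected digest missing, removes any stale tag with crictl, and then tries to load the image from the host-side cache directory, which in this run lacks the etcd tarball, hence the warning. The per-image probe/remove pair, based on the commands logged above, looks roughly like:
	    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.2   # missing or wrong hash -> "needs transfer"
	    sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2                       # drop the stale tag before reloading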
	I0819 19:59:42.897168  482115 kubeadm.go:934] updating node { 192.168.50.10 8443 v1.20.0 crio true true} ...
	I0819 19:59:42.897297  482115 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-382787 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-382787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
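	The unit text above becomes the 10-kubeadm.conf drop-in that is scp'd a few lines below; its ExecStart line wires the kubelet to CRI-O via --container-runtime-endpoint, sets the node IP and points at the bootstrap kubeconfig. Inspecting and reloading it on the guest (a sketch):
	    systemctl cat kubelet                      # kubelet.service plus the 10-kubeadm.conf drop-in
	    sudo systemctl daemon-reload && sudo systemctl restart kubelet
	    journalctl -u kubelet -n 50 --no-pager     # first place to look when kubeadm's health check fails later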
	I0819 19:59:42.897384  482115 ssh_runner.go:195] Run: crio config
	I0819 19:59:42.951788  482115 cni.go:84] Creating CNI manager for ""
	I0819 19:59:42.951817  482115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:59:42.951832  482115 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:59:42.951859  482115 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.10 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-382787 NodeName:kubernetes-upgrade-382787 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 19:59:42.952003  482115 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-382787"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
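	The generated config above combines a v1beta2 InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single file; it is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later copied to kubeadm.yaml. One way to sanity-check such a file without touching the node is a dry run against the same versioned kubeadm binary minikube uses in this log (a sketch):
	    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --dry-run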
	
	I0819 19:59:42.952070  482115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 19:59:42.963458  482115 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:59:42.963568  482115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:59:42.974767  482115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0819 19:59:42.993420  482115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:59:43.011770  482115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0819 19:59:43.030210  482115 ssh_runner.go:195] Run: grep 192.168.50.10	control-plane.minikube.internal$ /etc/hosts
	I0819 19:59:43.034208  482115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
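	Both /etc/hosts updates in this log (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent pattern: grep for the entry and, only if it is missing, rewrite the file with the old entry filtered out and the new one appended. Spelled out with quoting intact (a sketch of the command logged above):
	    grep $'192.168.50.10\tcontrol-plane.minikube.internal$' /etc/hosts || {
	      { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	        printf '192.168.50.10\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
	      sudo cp /tmp/h.$$ /etc/hosts
	    }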
	I0819 19:59:43.047837  482115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:59:43.166194  482115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:59:43.183545  482115 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787 for IP: 192.168.50.10
	I0819 19:59:43.183570  482115 certs.go:194] generating shared ca certs ...
	I0819 19:59:43.183593  482115 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:59:43.183778  482115 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:59:43.183846  482115 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:59:43.183861  482115 certs.go:256] generating profile certs ...
	I0819 19:59:43.183936  482115 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/client.key
	I0819 19:59:43.183966  482115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/client.crt with IP's: []
	I0819 19:59:43.419076  482115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/client.crt ...
	I0819 19:59:43.419126  482115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/client.crt: {Name:mk2b61c350b00307118ff6bc9ffa7c406edbf329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:59:43.419351  482115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/client.key ...
	I0819 19:59:43.419370  482115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/client.key: {Name:mk0d4b9e33f5a83e111741e27b3c761286c2ebb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:59:43.419470  482115 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.key.87b885b4
	I0819 19:59:43.419502  482115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.crt.87b885b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.10]
	I0819 19:59:43.493828  482115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.crt.87b885b4 ...
	I0819 19:59:43.493857  482115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.crt.87b885b4: {Name:mk8cb0f5490354aa4e9dace2eb533591f56bd293 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:59:43.494015  482115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.key.87b885b4 ...
	I0819 19:59:43.494029  482115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.key.87b885b4: {Name:mk26b3be43d4a09dbdaec1b233dddf8fb10210b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:59:43.494111  482115 certs.go:381] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.crt.87b885b4 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.crt
	I0819 19:59:43.494180  482115 certs.go:385] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.key.87b885b4 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.key
	I0819 19:59:43.494230  482115 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/proxy-client.key
	I0819 19:59:43.494246  482115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/proxy-client.crt with IP's: []
	I0819 19:59:43.638114  482115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/proxy-client.crt ...
	I0819 19:59:43.638148  482115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/proxy-client.crt: {Name:mka35c234623e2834c7bfccf250a6c2a77791bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:59:43.638314  482115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/proxy-client.key ...
	I0819 19:59:43.638327  482115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/proxy-client.key: {Name:mkb1e51cab1c675a36eaadf2cf23ad9616514d7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:59:43.638509  482115 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:59:43.638549  482115 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:59:43.638559  482115 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:59:43.638581  482115 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:59:43.638603  482115 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:59:43.638624  482115 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:59:43.638662  482115 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:59:43.639333  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:59:43.665554  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:59:43.691912  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:59:43.719260  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:59:43.746272  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 19:59:43.773309  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:59:43.800394  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:59:43.827126  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:59:43.853345  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:59:43.878796  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:59:43.903759  482115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:59:43.929983  482115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
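	All of the certificates generated above now sit under /var/lib/minikube/certs on the guest. A quick way to confirm the apiserver cert carries the SANs requested earlier (10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.50.10) is this sketch:
	    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	      | grep -A1 'Subject Alternative Name'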
	I0819 19:59:43.949398  482115 ssh_runner.go:195] Run: openssl version
	I0819 19:59:43.955644  482115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:59:43.966657  482115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:59:43.971316  482115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:59:43.971396  482115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:59:43.977290  482115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:59:43.990709  482115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:59:44.002045  482115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:59:44.008407  482115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:59:44.008489  482115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:59:44.018205  482115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:59:44.033943  482115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:59:44.052138  482115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:59:44.061252  482115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:59:44.061342  482115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:59:44.068754  482115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
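	The test -L / ln -fs calls above build the hashed symlinks OpenSSL expects in /etc/ssl/certs (b5213941.0, 51391683.0 and 3ec20f2e.0 are the subject-hash names of the three PEMs). The generic form of that step is, roughly:
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"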
	I0819 19:59:44.087218  482115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:59:44.092555  482115 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 19:59:44.092630  482115 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-382787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-382787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:59:44.092751  482115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:59:44.092821  482115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:59:44.129354  482115 cri.go:89] found id: ""
	I0819 19:59:44.129438  482115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:59:44.139404  482115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:59:44.149329  482115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:59:44.159830  482115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:59:44.159858  482115 kubeadm.go:157] found existing configuration files:
	
	I0819 19:59:44.159930  482115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:59:44.169400  482115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:59:44.169460  482115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:59:44.179567  482115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:59:44.189064  482115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:59:44.189163  482115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:59:44.199218  482115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:59:44.208814  482115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:59:44.208890  482115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:59:44.218473  482115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:59:44.227878  482115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:59:44.227961  482115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:59:44.237751  482115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:59:44.495227  482115 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 20:01:41.880726  482115 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 20:01:41.880884  482115 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 20:01:41.882300  482115 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 20:01:41.882367  482115 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 20:01:41.882510  482115 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 20:01:41.882610  482115 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 20:01:41.882697  482115 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 20:01:41.882756  482115 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 20:01:41.884440  482115 out.go:235]   - Generating certificates and keys ...
	I0819 20:01:41.884575  482115 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 20:01:41.884659  482115 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 20:01:41.884769  482115 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 20:01:41.884882  482115 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 20:01:41.884990  482115 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 20:01:41.885064  482115 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 20:01:41.885169  482115 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 20:01:41.885346  482115 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-382787 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	I0819 20:01:41.885443  482115 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 20:01:41.885629  482115 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-382787 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	I0819 20:01:41.885748  482115 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 20:01:41.885863  482115 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 20:01:41.885931  482115 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 20:01:41.886005  482115 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 20:01:41.886084  482115 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 20:01:41.886156  482115 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 20:01:41.886247  482115 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 20:01:41.886331  482115 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 20:01:41.886475  482115 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 20:01:41.886586  482115 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 20:01:41.886640  482115 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 20:01:41.886737  482115 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 20:01:41.888048  482115 out.go:235]   - Booting up control plane ...
	I0819 20:01:41.888185  482115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 20:01:41.888277  482115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 20:01:41.888353  482115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 20:01:41.888421  482115 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 20:01:41.888607  482115 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 20:01:41.888689  482115 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 20:01:41.888790  482115 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:01:41.889040  482115 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 20:01:41.889105  482115 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:01:41.889291  482115 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 20:01:41.889391  482115 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:01:41.889621  482115 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 20:01:41.889698  482115 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:01:41.889906  482115 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 20:01:41.890015  482115 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:01:41.890264  482115 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 20:01:41.890281  482115 kubeadm.go:310] 
	I0819 20:01:41.890337  482115 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 20:01:41.890394  482115 kubeadm.go:310] 		timed out waiting for the condition
	I0819 20:01:41.890405  482115 kubeadm.go:310] 
	I0819 20:01:41.890479  482115 kubeadm.go:310] 	This error is likely caused by:
	I0819 20:01:41.890524  482115 kubeadm.go:310] 		- The kubelet is not running
	I0819 20:01:41.890679  482115 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 20:01:41.890695  482115 kubeadm.go:310] 
	I0819 20:01:41.890820  482115 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 20:01:41.890874  482115 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 20:01:41.890930  482115 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 20:01:41.890949  482115 kubeadm.go:310] 
	I0819 20:01:41.891092  482115 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 20:01:41.891217  482115 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 20:01:41.891230  482115 kubeadm.go:310] 
	I0819 20:01:41.891371  482115 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 20:01:41.891476  482115 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 20:01:41.891578  482115 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 20:01:41.891696  482115 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 20:01:41.891800  482115 kubeadm.go:310] 
	W0819 20:01:41.891994  482115 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-382787 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-382787 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-382787 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-382787 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 20:01:41.892055  482115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 20:01:42.830087  482115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:01:42.848266  482115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 20:01:42.858759  482115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 20:01:42.858790  482115 kubeadm.go:157] found existing configuration files:
	
	I0819 20:01:42.858856  482115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 20:01:42.868747  482115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 20:01:42.868830  482115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 20:01:42.879286  482115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 20:01:42.892764  482115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 20:01:42.892851  482115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 20:01:42.906622  482115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 20:01:42.916157  482115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 20:01:42.916225  482115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 20:01:42.926168  482115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 20:01:42.935752  482115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 20:01:42.935837  482115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 20:01:42.950279  482115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 20:01:43.029093  482115 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 20:01:43.029194  482115 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 20:01:43.169755  482115 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 20:01:43.169903  482115 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 20:01:43.170019  482115 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 20:01:43.350363  482115 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 20:01:43.352366  482115 out.go:235]   - Generating certificates and keys ...
	I0819 20:01:43.352486  482115 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 20:01:43.352587  482115 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 20:01:43.352694  482115 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 20:01:43.352778  482115 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 20:01:43.352897  482115 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 20:01:43.352983  482115 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 20:01:43.353081  482115 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 20:01:43.353184  482115 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 20:01:43.353287  482115 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 20:01:43.353709  482115 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 20:01:43.353767  482115 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 20:01:43.353833  482115 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 20:01:43.477563  482115 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 20:01:43.663160  482115 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 20:01:43.771781  482115 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 20:01:44.107562  482115 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 20:01:44.129886  482115 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 20:01:44.131711  482115 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 20:01:44.131794  482115 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 20:01:44.280048  482115 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 20:01:44.281898  482115 out.go:235]   - Booting up control plane ...
	I0819 20:01:44.282045  482115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 20:01:44.285347  482115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 20:01:44.286653  482115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 20:01:44.287665  482115 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 20:01:44.290443  482115 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 20:02:24.293577  482115 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 20:02:24.293934  482115 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:02:24.294205  482115 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 20:02:29.294767  482115 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:02:29.295148  482115 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 20:02:39.296561  482115 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:02:39.296791  482115 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 20:02:59.295637  482115 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:02:59.295849  482115 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 20:03:39.294581  482115 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:03:39.294887  482115 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 20:03:39.294901  482115 kubeadm.go:310] 
	I0819 20:03:39.294951  482115 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 20:03:39.295027  482115 kubeadm.go:310] 		timed out waiting for the condition
	I0819 20:03:39.295057  482115 kubeadm.go:310] 
	I0819 20:03:39.295099  482115 kubeadm.go:310] 	This error is likely caused by:
	I0819 20:03:39.295153  482115 kubeadm.go:310] 		- The kubelet is not running
	I0819 20:03:39.295315  482115 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 20:03:39.295333  482115 kubeadm.go:310] 
	I0819 20:03:39.295503  482115 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 20:03:39.295579  482115 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 20:03:39.295615  482115 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 20:03:39.295622  482115 kubeadm.go:310] 
	I0819 20:03:39.295708  482115 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 20:03:39.295777  482115 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 20:03:39.295784  482115 kubeadm.go:310] 
	I0819 20:03:39.295931  482115 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 20:03:39.296022  482115 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 20:03:39.296119  482115 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 20:03:39.296205  482115 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 20:03:39.296216  482115 kubeadm.go:310] 
	I0819 20:03:39.296525  482115 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 20:03:39.296638  482115 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 20:03:39.296727  482115 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 20:03:39.296822  482115 kubeadm.go:394] duration metric: took 3m55.204199846s to StartCluster
	I0819 20:03:39.296876  482115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:03:39.296954  482115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:03:39.336615  482115 cri.go:89] found id: ""
	I0819 20:03:39.336662  482115 logs.go:276] 0 containers: []
	W0819 20:03:39.336675  482115 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:03:39.336683  482115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:03:39.336762  482115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:03:39.375726  482115 cri.go:89] found id: ""
	I0819 20:03:39.375763  482115 logs.go:276] 0 containers: []
	W0819 20:03:39.375775  482115 logs.go:278] No container was found matching "etcd"
	I0819 20:03:39.375785  482115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:03:39.375850  482115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:03:39.411274  482115 cri.go:89] found id: ""
	I0819 20:03:39.411306  482115 logs.go:276] 0 containers: []
	W0819 20:03:39.411316  482115 logs.go:278] No container was found matching "coredns"
	I0819 20:03:39.411322  482115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:03:39.411374  482115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:03:39.448503  482115 cri.go:89] found id: ""
	I0819 20:03:39.448538  482115 logs.go:276] 0 containers: []
	W0819 20:03:39.448550  482115 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:03:39.448572  482115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:03:39.448644  482115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:03:39.488040  482115 cri.go:89] found id: ""
	I0819 20:03:39.488075  482115 logs.go:276] 0 containers: []
	W0819 20:03:39.488088  482115 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:03:39.488104  482115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:03:39.488174  482115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:03:39.520722  482115 cri.go:89] found id: ""
	I0819 20:03:39.520768  482115 logs.go:276] 0 containers: []
	W0819 20:03:39.520781  482115 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:03:39.520792  482115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:03:39.520877  482115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:03:39.555251  482115 cri.go:89] found id: ""
	I0819 20:03:39.555279  482115 logs.go:276] 0 containers: []
	W0819 20:03:39.555291  482115 logs.go:278] No container was found matching "kindnet"
	I0819 20:03:39.555307  482115 logs.go:123] Gathering logs for kubelet ...
	I0819 20:03:39.555322  482115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:03:39.606358  482115 logs.go:123] Gathering logs for dmesg ...
	I0819 20:03:39.606416  482115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:03:39.619766  482115 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:03:39.619800  482115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:03:39.742709  482115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:03:39.742749  482115 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:03:39.742766  482115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:03:39.856610  482115 logs.go:123] Gathering logs for container status ...
	I0819 20:03:39.856653  482115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 20:03:39.897274  482115 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 20:03:39.897352  482115 out.go:270] * 
	* 
	W0819 20:03:39.897422  482115 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 20:03:39.897474  482115 out.go:270] * 
	* 
	W0819 20:03:39.898593  482115 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 20:03:39.901753  482115 out.go:201] 
	W0819 20:03:39.902872  482115 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 20:03:39.902927  482115 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 20:03:39.902946  482115 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 20:03:39.904242  482115 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-382787 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
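Exit status 109 here corresponds to the K8S_KUBELET_NOT_RUNNING error shown in the log: kubeadm polls the kubelet health endpoint (http://localhost:10248/healthz), the connection is refused, and the 4m0s wait-control-plane timeout expires. A minimal triage sketch for a run like this, assuming shell access to the kubernetes-upgrade-382787 VM; the commands below are the ones the log itself suggests, and the --extra-config retry is the suggestion minikube prints, not a verified fix for this particular failure:

	# Inspect the kubelet inside the VM (commands taken from the kubeadm/minikube output above)
	out/minikube-linux-amd64 -p kubernetes-upgrade-382787 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p kubernetes-upgrade-382787 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	out/minikube-linux-amd64 -p kubernetes-upgrade-382787 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the same start with the cgroup-driver override suggested in the log
	out/minikube-linux-amd64 start -p kubernetes-upgrade-382787 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
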
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-382787
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-382787: (1.329129942s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-382787 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-382787 status --format={{.Host}}: exit status 7 (65.890487ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-382787 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-382787 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.029145687s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-382787 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-382787 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-382787 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (89.587011ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-382787] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-382787
	    minikube start -p kubernetes-upgrade-382787 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3827872 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-382787 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-382787 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0819 20:04:38.961240  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:04:56.398222  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-382787 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (13m55.979864147s)
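Before the full trace below, a minimal post-mortem sketch for a stall like this (~14 minutes before exit status 109) is to pull the kubelet and cri-o journals from inside the VM, as the earlier stderr suggested; this assumes the VM is still reachable:

    out/minikube-linux-amd64 -p kubernetes-upgrade-382787 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
    out/minikube-linux-amd64 -p kubernetes-upgrade-382787 ssh "sudo journalctl -u crio --no-pager | tail -n 100"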

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-382787] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "kubernetes-upgrade-382787" primary control-plane node in "kubernetes-upgrade-382787" cluster
	* Updating the running kvm2 "kubernetes-upgrade-382787" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 20:04:20.589198  486208 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:04:20.589496  486208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:04:20.589508  486208 out.go:358] Setting ErrFile to fd 2...
	I0819 20:04:20.589512  486208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:04:20.589768  486208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 20:04:20.590370  486208 out.go:352] Setting JSON to false
	I0819 20:04:20.591428  486208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13612,"bootTime":1724084249,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 20:04:20.591500  486208 start.go:139] virtualization: kvm guest
	I0819 20:04:20.593513  486208 out.go:177] * [kubernetes-upgrade-382787] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 20:04:20.595297  486208 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 20:04:20.595288  486208 notify.go:220] Checking for updates...
	I0819 20:04:20.596753  486208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 20:04:20.598203  486208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 20:04:20.599600  486208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 20:04:20.600858  486208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 20:04:20.602246  486208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 20:04:20.603996  486208 config.go:182] Loaded profile config "kubernetes-upgrade-382787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:04:20.604655  486208 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:04:20.604745  486208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:04:20.620832  486208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43523
	I0819 20:04:20.621481  486208 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:04:20.622156  486208 main.go:141] libmachine: Using API Version  1
	I0819 20:04:20.622184  486208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:04:20.622622  486208 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:04:20.622857  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 20:04:20.623210  486208 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 20:04:20.623694  486208 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:04:20.623747  486208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:04:20.640134  486208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34385
	I0819 20:04:20.640601  486208 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:04:20.641288  486208 main.go:141] libmachine: Using API Version  1
	I0819 20:04:20.641336  486208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:04:20.641720  486208 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:04:20.641944  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 20:04:20.681897  486208 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 20:04:20.683013  486208 start.go:297] selected driver: kvm2
	I0819 20:04:20.683040  486208 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-382787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-382787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:04:20.683174  486208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 20:04:20.683983  486208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 20:04:20.684070  486208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 20:04:20.705846  486208 install.go:137] /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0819 20:04:20.706358  486208 cni.go:84] Creating CNI manager for ""
	I0819 20:04:20.706379  486208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 20:04:20.706428  486208 start.go:340] cluster config:
	{Name:kubernetes-upgrade-382787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-382787 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:04:20.706556  486208 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 20:04:20.707978  486208 out.go:177] * Starting "kubernetes-upgrade-382787" primary control-plane node in "kubernetes-upgrade-382787" cluster
	I0819 20:04:20.709006  486208 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:04:20.709042  486208 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 20:04:20.709053  486208 cache.go:56] Caching tarball of preloaded images
	I0819 20:04:20.709176  486208 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 20:04:20.709188  486208 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 20:04:20.709276  486208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/config.json ...
	I0819 20:04:20.709467  486208 start.go:360] acquireMachinesLock for kubernetes-upgrade-382787: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 20:04:20.709516  486208 start.go:364] duration metric: took 31.146µs to acquireMachinesLock for "kubernetes-upgrade-382787"
	I0819 20:04:20.709529  486208 start.go:96] Skipping create...Using existing machine configuration
	I0819 20:04:20.709537  486208 fix.go:54] fixHost starting: 
	I0819 20:04:20.709793  486208 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:04:20.709822  486208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:04:20.726294  486208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41981
	I0819 20:04:20.726889  486208 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:04:20.727592  486208 main.go:141] libmachine: Using API Version  1
	I0819 20:04:20.727618  486208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:04:20.727997  486208 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:04:20.728249  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 20:04:20.728413  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetState
	I0819 20:04:20.730437  486208 fix.go:112] recreateIfNeeded on kubernetes-upgrade-382787: state=Running err=<nil>
	W0819 20:04:20.730465  486208 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 20:04:20.731926  486208 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-382787" VM ...
	I0819 20:04:20.733163  486208 machine.go:93] provisionDockerMachine start ...
	I0819 20:04:20.733197  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 20:04:20.733485  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 20:04:20.736506  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:20.736902  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 21:03:52 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 20:04:20.736950  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:20.737155  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 20:04:20.737401  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 20:04:20.737615  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 20:04:20.737762  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 20:04:20.737944  486208 main.go:141] libmachine: Using SSH client type: native
	I0819 20:04:20.738147  486208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0819 20:04:20.738159  486208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 20:04:20.939220  486208 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-382787
	
	I0819 20:04:20.939259  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetMachineName
	I0819 20:04:20.939546  486208 buildroot.go:166] provisioning hostname "kubernetes-upgrade-382787"
	I0819 20:04:20.939587  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetMachineName
	I0819 20:04:20.939763  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 20:04:20.942684  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:20.942999  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 21:03:52 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 20:04:20.943020  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:20.943364  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 20:04:20.943589  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 20:04:20.943778  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 20:04:20.943969  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 20:04:20.944175  486208 main.go:141] libmachine: Using SSH client type: native
	I0819 20:04:20.944403  486208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0819 20:04:20.944422  486208 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-382787 && echo "kubernetes-upgrade-382787" | sudo tee /etc/hostname
	I0819 20:04:21.173782  486208 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-382787
	
	I0819 20:04:21.173813  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 20:04:21.176881  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:21.177257  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 21:03:52 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 20:04:21.177304  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:21.177482  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 20:04:21.177741  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 20:04:21.177923  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 20:04:21.178087  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 20:04:21.178283  486208 main.go:141] libmachine: Using SSH client type: native
	I0819 20:04:21.178534  486208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0819 20:04:21.178560  486208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-382787' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-382787/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-382787' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 20:04:21.330646  486208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 20:04:21.330681  486208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 20:04:21.330707  486208 buildroot.go:174] setting up certificates
	I0819 20:04:21.330722  486208 provision.go:84] configureAuth start
	I0819 20:04:21.330736  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetMachineName
	I0819 20:04:21.331044  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetIP
	I0819 20:04:21.334332  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:21.334753  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 21:03:52 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 20:04:21.334798  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:21.334968  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 20:04:21.337476  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:21.337861  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 21:03:52 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 20:04:21.337894  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:21.338024  486208 provision.go:143] copyHostCerts
	I0819 20:04:21.338096  486208 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 20:04:21.338117  486208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 20:04:21.338172  486208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 20:04:21.338255  486208 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 20:04:21.338263  486208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 20:04:21.338282  486208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 20:04:21.338331  486208 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 20:04:21.338338  486208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 20:04:21.338353  486208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 20:04:21.338398  486208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-382787 san=[127.0.0.1 192.168.50.10 kubernetes-upgrade-382787 localhost minikube]
	I0819 20:04:21.639363  486208 provision.go:177] copyRemoteCerts
	I0819 20:04:21.639439  486208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 20:04:21.639472  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 20:04:21.642688  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:21.643106  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 21:03:52 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 20:04:21.643143  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:21.643341  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 20:04:21.643573  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 20:04:21.643726  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 20:04:21.643911  486208 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/id_rsa Username:docker}
	I0819 20:04:21.760462  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 20:04:21.809689  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0819 20:04:21.838753  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 20:04:21.864825  486208 provision.go:87] duration metric: took 534.079911ms to configureAuth
	I0819 20:04:21.864862  486208 buildroot.go:189] setting minikube options for container-runtime
	I0819 20:04:21.865054  486208 config.go:182] Loaded profile config "kubernetes-upgrade-382787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:04:21.865199  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 20:04:21.867775  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:21.868097  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 21:03:52 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 20:04:21.868132  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:21.868351  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 20:04:21.868564  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 20:04:21.868718  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 20:04:21.868825  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 20:04:21.869013  486208 main.go:141] libmachine: Using SSH client type: native
	I0819 20:04:21.869235  486208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0819 20:04:21.869256  486208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 20:04:32.318421  486208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 20:04:32.318453  486208 machine.go:96] duration metric: took 11.585269826s to provisionDockerMachine
	I0819 20:04:32.318468  486208 start.go:293] postStartSetup for "kubernetes-upgrade-382787" (driver="kvm2")
	I0819 20:04:32.318479  486208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 20:04:32.318495  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 20:04:32.318876  486208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 20:04:32.318903  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 20:04:32.321612  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:32.321964  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 21:03:52 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 20:04:32.321993  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:32.322212  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 20:04:32.322453  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 20:04:32.322636  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 20:04:32.322788  486208 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/id_rsa Username:docker}
	I0819 20:04:32.403298  486208 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 20:04:32.408197  486208 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 20:04:32.408232  486208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 20:04:32.408316  486208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 20:04:32.408407  486208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 20:04:32.408535  486208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 20:04:32.418889  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 20:04:32.447386  486208 start.go:296] duration metric: took 128.899916ms for postStartSetup
	I0819 20:04:32.447431  486208 fix.go:56] duration metric: took 11.737892747s for fixHost
	I0819 20:04:32.447453  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 20:04:32.450197  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:32.450492  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 21:03:52 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 20:04:32.450532  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:32.450694  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 20:04:32.450950  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 20:04:32.451127  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 20:04:32.451293  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 20:04:32.451442  486208 main.go:141] libmachine: Using SSH client type: native
	I0819 20:04:32.451618  486208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0819 20:04:32.451629  486208 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 20:04:32.554086  486208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724097872.545477306
	
	I0819 20:04:32.554119  486208 fix.go:216] guest clock: 1724097872.545477306
	I0819 20:04:32.554130  486208 fix.go:229] Guest: 2024-08-19 20:04:32.545477306 +0000 UTC Remote: 2024-08-19 20:04:32.44743461 +0000 UTC m=+11.895491257 (delta=98.042696ms)
	I0819 20:04:32.554159  486208 fix.go:200] guest clock delta is within tolerance: 98.042696ms
	I0819 20:04:32.554167  486208 start.go:83] releasing machines lock for "kubernetes-upgrade-382787", held for 11.844642728s
	I0819 20:04:32.554191  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 20:04:32.554523  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetIP
	I0819 20:04:32.557434  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:32.557749  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 21:03:52 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 20:04:32.557786  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:32.557983  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 20:04:32.558561  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 20:04:32.558778  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .DriverName
	I0819 20:04:32.558860  486208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 20:04:32.558912  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 20:04:32.559033  486208 ssh_runner.go:195] Run: cat /version.json
	I0819 20:04:32.559058  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHHostname
	I0819 20:04:32.561621  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:32.561955  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 21:03:52 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 20:04:32.561983  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:32.562007  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:32.562127  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 20:04:32.562316  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 20:04:32.562439  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 21:03:52 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 20:04:32.562466  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:04:32.562481  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 20:04:32.562665  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHPort
	I0819 20:04:32.562660  486208 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/id_rsa Username:docker}
	I0819 20:04:32.562807  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHKeyPath
	I0819 20:04:32.562968  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetSSHUsername
	I0819 20:04:32.563095  486208 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/kubernetes-upgrade-382787/id_rsa Username:docker}
	I0819 20:04:32.701372  486208 ssh_runner.go:195] Run: systemctl --version
	I0819 20:04:32.712899  486208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 20:04:33.139526  486208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 20:04:33.151381  486208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 20:04:33.151450  486208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:04:33.173579  486208 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 20:04:33.173606  486208 start.go:495] detecting cgroup driver to use...
	I0819 20:04:33.173672  486208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 20:04:33.237089  486208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 20:04:33.344133  486208 docker.go:217] disabling cri-docker service (if available) ...
	I0819 20:04:33.344216  486208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 20:04:33.418198  486208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 20:04:33.460245  486208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 20:04:33.707754  486208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 20:04:33.933358  486208 docker.go:233] disabling docker service ...
	I0819 20:04:33.933447  486208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 20:04:33.971904  486208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 20:04:33.989259  486208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 20:04:34.177444  486208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 20:04:34.394159  486208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 20:04:34.414763  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 20:04:34.438135  486208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 20:04:34.438220  486208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:04:34.471032  486208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 20:04:34.471127  486208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:04:34.523255  486208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:04:34.553275  486208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:04:34.566602  486208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 20:04:34.578206  486208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:04:34.595312  486208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:04:34.620667  486208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:04:34.636658  486208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 20:04:34.651174  486208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 20:04:34.667440  486208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:04:34.842105  486208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 20:06:05.296907  486208 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.454745014s)
	I0819 20:06:05.296951  486208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 20:06:05.297014  486208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 20:06:05.303545  486208 start.go:563] Will wait 60s for crictl version
	I0819 20:06:05.303608  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:06:05.307562  486208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 20:06:05.346538  486208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 20:06:05.346623  486208 ssh_runner.go:195] Run: crio --version
	I0819 20:06:05.377586  486208 ssh_runner.go:195] Run: crio --version
	I0819 20:06:05.411590  486208 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 20:06:05.412760  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) Calling .GetIP
	I0819 20:06:05.415762  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:06:05.416163  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:af:a1", ip: ""} in network mk-kubernetes-upgrade-382787: {Iface:virbr2 ExpiryTime:2024-08-19 21:03:52 +0000 UTC Type:0 Mac:52:54:00:a8:af:a1 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-382787 Clientid:01:52:54:00:a8:af:a1}
	I0819 20:06:05.416195  486208 main.go:141] libmachine: (kubernetes-upgrade-382787) DBG | domain kubernetes-upgrade-382787 has defined IP address 192.168.50.10 and MAC address 52:54:00:a8:af:a1 in network mk-kubernetes-upgrade-382787
	I0819 20:06:05.416445  486208 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 20:06:05.420813  486208 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-382787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-382787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 20:06:05.420944  486208 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:06:05.420992  486208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:06:05.472565  486208 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 20:06:05.472590  486208 crio.go:433] Images already preloaded, skipping extraction
	I0819 20:06:05.472639  486208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:06:05.507648  486208 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 20:06:05.507674  486208 cache_images.go:84] Images are preloaded, skipping loading
	I0819 20:06:05.507683  486208 kubeadm.go:934] updating node { 192.168.50.10 8443 v1.31.0 crio true true} ...
	I0819 20:06:05.507827  486208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-382787 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-382787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 20:06:05.507913  486208 ssh_runner.go:195] Run: crio config
	I0819 20:06:05.556136  486208 cni.go:84] Creating CNI manager for ""
	I0819 20:06:05.556158  486208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 20:06:05.556167  486208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 20:06:05.556189  486208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.10 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-382787 NodeName:kubernetes-upgrade-382787 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 20:06:05.556316  486208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-382787"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 20:06:05.556376  486208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 20:06:05.566944  486208 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 20:06:05.567039  486208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 20:06:05.578789  486208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0819 20:06:05.598860  486208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 20:06:05.617448  486208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0819 20:06:05.637258  486208 ssh_runner.go:195] Run: grep 192.168.50.10	control-plane.minikube.internal$ /etc/hosts
	I0819 20:06:05.641689  486208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:06:05.785901  486208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:06:05.801636  486208 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787 for IP: 192.168.50.10
	I0819 20:06:05.801663  486208 certs.go:194] generating shared ca certs ...
	I0819 20:06:05.801681  486208 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:06:05.801913  486208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 20:06:05.801969  486208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 20:06:05.801980  486208 certs.go:256] generating profile certs ...
	I0819 20:06:05.802078  486208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/client.key
	I0819 20:06:05.802149  486208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.key.87b885b4
	I0819 20:06:05.802193  486208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/proxy-client.key
	I0819 20:06:05.802330  486208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 20:06:05.802364  486208 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 20:06:05.802381  486208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 20:06:05.802415  486208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 20:06:05.802447  486208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 20:06:05.802478  486208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 20:06:05.802538  486208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 20:06:05.803204  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 20:06:05.828582  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 20:06:05.853477  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 20:06:05.877822  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 20:06:05.902135  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 20:06:05.927078  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 20:06:05.953009  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 20:06:05.979252  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/kubernetes-upgrade-382787/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 20:06:06.095167  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 20:06:06.196913  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 20:06:06.239599  486208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 20:06:06.367223  486208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 20:06:06.439065  486208 ssh_runner.go:195] Run: openssl version
	I0819 20:06:06.470575  486208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 20:06:06.540214  486208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 20:06:06.549971  486208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 20:06:06.550040  486208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 20:06:06.576508  486208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 20:06:06.601524  486208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 20:06:06.625326  486208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:06:06.636173  486208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:06:06.636262  486208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:06:06.649715  486208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 20:06:06.677289  486208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 20:06:06.718723  486208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 20:06:06.736176  486208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 20:06:06.736256  486208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 20:06:06.746252  486208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
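The ls / openssl / ln sequence above is the standard OpenSSL CA-directory layout: each trusted PEM is linked under /etc/ssl/certs as `<subject-hash>.0` so verification can find it by hash. A minimal sketch of the same pattern for one of the certificates shown above:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	# Subject-name hash, e.g. b5213941 for this CA in the log above.
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	# Create the hash symlink unless one is already present.
	sudo test -L "/etc/ssl/certs/${HASH}.0" || sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"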
	I0819 20:06:06.769774  486208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 20:06:06.782747  486208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 20:06:06.795308  486208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 20:06:06.816081  486208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 20:06:06.832000  486208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 20:06:06.854492  486208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 20:06:06.863255  486208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
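Each `-checkend 86400` call above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what triggers regeneration. A minimal sketch of the same check over a few of the control-plane certs listed above:

	for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	         /var/lib/minikube/certs/etcd/server.crt \
	         /var/lib/minikube/certs/front-proxy-client.crt; do
	  # Exit status 0: still valid 24h from now; non-zero: about to expire.
	  sudo openssl x509 -noout -in "$c" -checkend 86400 \
	    && echo "ok       $c" || echo "expiring $c"
	done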
	I0819 20:06:06.870310  486208 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-382787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-382787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:06:06.870443  486208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 20:06:06.870512  486208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 20:06:06.910052  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:06:06.910077  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:06:06.910080  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:06:06.910084  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:06:06.910086  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:06:06.910089  486208 cri.go:89] found id: "86bbebdc68520ff506332730765ef6c71c61aa65ba5a758294ce8a209052e443"
	I0819 20:06:06.910092  486208 cri.go:89] found id: "c7dff21be434e649f9f0020da65ca5948fc51b64af6146afabafa10f22a63c2e"
	I0819 20:06:06.910094  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:06:06.910096  486208 cri.go:89] found id: ""
	I0819 20:06:06.910149  486208 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
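The end of the stderr block above shows how minikube enumerates kube-system containers before deciding how to recover: a label-filtered `crictl ps` followed by `runc list`. A minimal sketch of the same enumeration, assuming crictl talks to the CRI-O socket configured in the kubelet settings earlier (unix:///var/run/crio/crio.sock):

	# All kube-system container IDs known to CRI-O, running or exited.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
	  ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Cross-check against the low-level runtime's view of live containers.
	sudo runc list -f json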
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-382787 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-19 20:18:16.538755841 +0000 UTC m=+6114.736033700
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-382787 -n kubernetes-upgrade-382787
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-382787 -n kubernetes-upgrade-382787: exit status 2 (236.584891ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-382787 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-382787 logs -n 25: (2.129893107s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-627417                              | stopped-upgrade-627417    | jenkins | v1.33.1 | 19 Aug 24 20:01 UTC | 19 Aug 24 20:02 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-696812 ssh cat                      | force-systemd-flag-696812 | jenkins | v1.33.1 | 19 Aug 24 20:01 UTC | 19 Aug 24 20:01 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-696812                           | force-systemd-flag-696812 | jenkins | v1.33.1 | 19 Aug 24 20:01 UTC | 19 Aug 24 20:01 UTC |
	| start   | -p old-k8s-version-968990                              | old-k8s-version-968990    | jenkins | v1.33.1 | 19 Aug 24 20:01 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-228973                              | cert-expiration-228973    | jenkins | v1.33.1 | 19 Aug 24 20:01 UTC | 19 Aug 24 20:02 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-627417                              | stopped-upgrade-627417    | jenkins | v1.33.1 | 19 Aug 24 20:02 UTC | 19 Aug 24 20:02 UTC |
	| start   | -p no-preload-944514                                   | no-preload-944514         | jenkins | v1.33.1 | 19 Aug 24 20:02 UTC | 19 Aug 24 20:03 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-228973                              | cert-expiration-228973    | jenkins | v1.33.1 | 19 Aug 24 20:02 UTC | 19 Aug 24 20:03 UTC |
	| start   | -p embed-certs-108534                                  | embed-certs-108534        | jenkins | v1.33.1 | 19 Aug 24 20:03 UTC | 19 Aug 24 20:03 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-382787                           | kubernetes-upgrade-382787 | jenkins | v1.33.1 | 19 Aug 24 20:03 UTC | 19 Aug 24 20:03 UTC |
	| start   | -p kubernetes-upgrade-382787                           | kubernetes-upgrade-382787 | jenkins | v1.33.1 | 19 Aug 24 20:03 UTC | 19 Aug 24 20:04 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-944514             | no-preload-944514         | jenkins | v1.33.1 | 19 Aug 24 20:03 UTC | 19 Aug 24 20:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-944514                                   | no-preload-944514         | jenkins | v1.33.1 | 19 Aug 24 20:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-108534            | embed-certs-108534        | jenkins | v1.33.1 | 19 Aug 24 20:04 UTC | 19 Aug 24 20:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p embed-certs-108534                                  | embed-certs-108534        | jenkins | v1.33.1 | 19 Aug 24 20:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-382787                           | kubernetes-upgrade-382787 | jenkins | v1.33.1 | 19 Aug 24 20:04 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-382787                           | kubernetes-upgrade-382787 | jenkins | v1.33.1 | 19 Aug 24 20:04 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-944514                  | no-preload-944514         | jenkins | v1.33.1 | 19 Aug 24 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-944514                                   | no-preload-944514         | jenkins | v1.33.1 | 19 Aug 24 20:06 UTC | 19 Aug 24 20:17 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-968990        | old-k8s-version-968990    | jenkins | v1.33.1 | 19 Aug 24 20:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-108534                 | embed-certs-108534        | jenkins | v1.33.1 | 19 Aug 24 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p embed-certs-108534                                  | embed-certs-108534        | jenkins | v1.33.1 | 19 Aug 24 20:06 UTC | 19 Aug 24 20:16 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-968990                              | old-k8s-version-968990    | jenkins | v1.33.1 | 19 Aug 24 20:08 UTC | 19 Aug 24 20:08 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-968990             | old-k8s-version-968990    | jenkins | v1.33.1 | 19 Aug 24 20:08 UTC | 19 Aug 24 20:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-968990                              | old-k8s-version-968990    | jenkins | v1.33.1 | 19 Aug 24 20:08 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 20:08:22
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
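The header above documents the klog line format used for the rest of the transcript: severity letter (I/W/E/F), mmdd, timestamp, thread id, then the emitting file:line and message. A minimal sketch, on a hypothetical saved copy of such a log, for pulling out just the warnings and errors:

	# Keep only W(arning), E(rror) and F(atal) lines; last-start.log is a placeholder name.
	grep -E '^[WEF][0-9]{4} ' last-start.log
	# Or count lines per severity for a quick overview.
	awk '{ print substr($1, 1, 1) }' last-start.log | sort | uniq -c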
	I0819 20:08:22.951096  487755 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:08:22.951390  487755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:08:22.951399  487755 out.go:358] Setting ErrFile to fd 2...
	I0819 20:08:22.951404  487755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:08:22.951606  487755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 20:08:22.952205  487755 out.go:352] Setting JSON to false
	I0819 20:08:22.953267  487755 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13854,"bootTime":1724084249,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 20:08:22.953375  487755 start.go:139] virtualization: kvm guest
	I0819 20:08:22.955404  487755 out.go:177] * [old-k8s-version-968990] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 20:08:22.956711  487755 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 20:08:22.956761  487755 notify.go:220] Checking for updates...
	I0819 20:08:22.959028  487755 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 20:08:22.960297  487755 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 20:08:22.961805  487755 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 20:08:22.963073  487755 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 20:08:22.964273  487755 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 20:08:22.965927  487755 config.go:182] Loaded profile config "old-k8s-version-968990": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 20:08:22.966375  487755 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:08:22.966432  487755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:08:22.982128  487755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45351
	I0819 20:08:22.982637  487755 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:08:22.983245  487755 main.go:141] libmachine: Using API Version  1
	I0819 20:08:22.983286  487755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:08:22.983636  487755 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:08:22.983857  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .DriverName
	I0819 20:08:22.985720  487755 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 20:08:22.986932  487755 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 20:08:22.987285  487755 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:08:22.987331  487755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:08:23.003531  487755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0819 20:08:23.004069  487755 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:08:23.004657  487755 main.go:141] libmachine: Using API Version  1
	I0819 20:08:23.004687  487755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:08:23.005042  487755 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:08:23.005275  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .DriverName
	I0819 20:08:23.043405  487755 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 20:08:23.044738  487755 start.go:297] selected driver: kvm2
	I0819 20:08:23.044780  487755 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-968990 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:08:23.044903  487755 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 20:08:23.045695  487755 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 20:08:23.045815  487755 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 20:08:23.061776  487755 install.go:137] /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0819 20:08:23.062188  487755 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 20:08:23.062259  487755 cni.go:84] Creating CNI manager for ""
	I0819 20:08:23.062273  487755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 20:08:23.062319  487755 start.go:340] cluster config:
	{Name:old-k8s-version-968990 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968990 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:08:23.062432  487755 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 20:08:23.064154  487755 out.go:177] * Starting "old-k8s-version-968990" primary control-plane node in "old-k8s-version-968990" cluster
	I0819 20:08:19.909423  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:08:22.985424  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:08:20.683321  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:08:20.683363  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:08:20.700206  486208 logs.go:123] Gathering logs for kube-controller-manager [862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387] ...
	I0819 20:08:20.700245  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387"
	I0819 20:08:20.735829  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:08:20.735869  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:20.770839  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:08:20.770879  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:08:21.090521  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:08:21.090568  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:08:21.165585  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:08:21.165608  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:08:21.165628  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:21.231222  486208 logs.go:123] Gathering logs for kube-apiserver [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b] ...
	I0819 20:08:21.231271  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:21.268793  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:08:21.268842  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:21.309535  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:08:21.309577  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:23.845826  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:08:23.846526  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
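Both the healthz probe above and the earlier `kubectl describe nodes` attempt fail with connection refused, i.e. nothing is serving on port 8443 yet. A minimal sketch of probing the same endpoints by hand from the node, using only addresses that already appear in this log:

	# API server health endpoint (TLS verification skipped for brevity).
	curl -k --max-time 5 https://192.168.50.10:8443/healthz; echo
	curl -k --max-time 5 https://control-plane.minikube.internal:8443/healthz; echo
	# Confirm whether anything is listening on 8443 at all.
	sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"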
	I0819 20:08:23.846583  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:08:23.846651  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:08:23.880924  486208 cri.go:89] found id: "3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:23.880951  486208 cri.go:89] found id: ""
	I0819 20:08:23.880962  486208 logs.go:276] 1 containers: [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b]
	I0819 20:08:23.881027  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:23.884999  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:08:23.885076  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:08:23.918338  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:23.918366  486208 cri.go:89] found id: ""
	I0819 20:08:23.918375  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:08:23.918442  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:23.922595  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:08:23.922671  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:08:23.960865  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:23.960892  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:23.960896  486208 cri.go:89] found id: ""
	I0819 20:08:23.960903  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:08:23.960956  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:23.965272  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:23.969243  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:08:23.969311  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:08:24.002615  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:24.002641  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:24.002645  486208 cri.go:89] found id: ""
	I0819 20:08:24.002653  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:08:24.002707  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:24.006982  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:24.010939  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:08:24.011022  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:08:24.048876  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:24.048910  486208 cri.go:89] found id: ""
	I0819 20:08:24.048919  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:08:24.048978  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:24.052965  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:08:24.053051  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:08:24.087045  486208 cri.go:89] found id: "862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387"
	I0819 20:08:24.087074  486208 cri.go:89] found id: ""
	I0819 20:08:24.087084  486208 logs.go:276] 1 containers: [862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387]
	I0819 20:08:24.087152  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:24.091079  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:08:24.091155  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:08:24.131862  486208 cri.go:89] found id: ""
	I0819 20:08:24.131897  486208 logs.go:276] 0 containers: []
	W0819 20:08:24.131906  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:08:24.131914  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:08:24.131978  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:08:24.169892  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:24.169918  486208 cri.go:89] found id: ""
	I0819 20:08:24.169928  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:08:24.169997  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:24.173751  486208 logs.go:123] Gathering logs for kube-apiserver [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b] ...
	I0819 20:08:24.173774  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:24.212581  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:08:24.212614  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:24.278766  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:08:24.278811  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:24.315231  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:08:24.315261  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:24.351441  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:08:24.351471  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:08:24.687613  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:08:24.687678  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:08:24.752365  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:08:24.752396  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:08:24.752414  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:24.794510  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:08:24.794546  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:24.830402  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:08:24.830437  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:24.865102  486208 logs.go:123] Gathering logs for kube-controller-manager [862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387] ...
	I0819 20:08:24.865167  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387"
	I0819 20:08:24.901510  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:08:24.901551  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:08:25.010885  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:08:25.010933  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:08:25.025425  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:08:25.025460  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:25.062140  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:08:25.062173  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
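All of the "Gathering logs for ..." steps above reduce to three primitives: journalctl for the systemd units (kubelet, crio), `crictl logs --tail` for individual containers, and a crictl/docker fallback for overall container status. A minimal sketch of the same collection, looking a container up by name instead of hard-coding an ID:

	# systemd-managed components, same units the collector queries.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# Logs for one container, e.g. etcd, resolved by name first.
	CID=$(sudo crictl ps -a --quiet --name=etcd | head -n1)
	sudo /usr/bin/crictl logs --tail 400 "$CID"
	# Overall container status, falling back to docker if crictl is absent.
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a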
	I0819 20:08:23.065331  487755 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 20:08:23.065384  487755 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 20:08:23.065397  487755 cache.go:56] Caching tarball of preloaded images
	I0819 20:08:23.065515  487755 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 20:08:23.065529  487755 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 20:08:23.065640  487755 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/old-k8s-version-968990/config.json ...
	I0819 20:08:23.065880  487755 start.go:360] acquireMachinesLock for old-k8s-version-968990: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 20:08:29.061477  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:08:27.604495  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:08:27.605184  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:08:27.605252  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:08:27.605312  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:08:27.656814  486208 cri.go:89] found id: "3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:27.656843  486208 cri.go:89] found id: ""
	I0819 20:08:27.656852  486208 logs.go:276] 1 containers: [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b]
	I0819 20:08:27.656917  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:27.665502  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:08:27.665588  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:08:27.704222  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:27.704251  486208 cri.go:89] found id: ""
	I0819 20:08:27.704259  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:08:27.704313  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:27.708316  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:08:27.708398  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:08:27.758026  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:27.758053  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:27.758057  486208 cri.go:89] found id: ""
	I0819 20:08:27.758066  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:08:27.758135  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:27.762078  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:27.765806  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:08:27.765872  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:08:27.803815  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:27.803846  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:27.803850  486208 cri.go:89] found id: ""
	I0819 20:08:27.803860  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:08:27.803916  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:27.808334  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:27.812209  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:08:27.812280  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:08:27.857410  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:27.857436  486208 cri.go:89] found id: ""
	I0819 20:08:27.857445  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:08:27.857506  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:27.861500  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:08:27.861573  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:08:27.911734  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:27.911758  486208 cri.go:89] found id: "862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387"
	I0819 20:08:27.911762  486208 cri.go:89] found id: ""
	I0819 20:08:27.911775  486208 logs.go:276] 2 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74 862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387]
	I0819 20:08:27.911826  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:27.916253  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:27.920299  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:08:27.920387  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:08:27.970113  486208 cri.go:89] found id: ""
	I0819 20:08:27.970152  486208 logs.go:276] 0 containers: []
	W0819 20:08:27.970167  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:08:27.970175  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:08:27.970249  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:08:28.023903  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:28.023935  486208 cri.go:89] found id: ""
	I0819 20:08:28.023945  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:08:28.024006  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:28.028190  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:08:28.028224  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:28.119442  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:08:28.119487  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:28.164501  486208 logs.go:123] Gathering logs for kube-controller-manager [862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387] ...
	I0819 20:08:28.164538  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387"
	I0819 20:08:28.208273  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:08:28.208308  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:08:28.228141  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:08:28.228186  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:28.270846  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:08:28.270886  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:28.322168  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:08:28.322207  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:28.358693  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:08:28.358727  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:08:28.670369  486208 logs.go:123] Gathering logs for kube-apiserver [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b] ...
	I0819 20:08:28.670412  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:28.707960  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:08:28.707989  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:08:28.747623  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:08:28.747653  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:08:28.855083  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:08:28.855132  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:08:28.924724  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:08:28.924748  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:08:28.924765  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:28.961353  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:08:28.961391  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:28.997788  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:08:28.997820  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
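
The cycle above is the health probe the tooling keeps repeating: it GETs the apiserver's /healthz endpoint and treats a refused connection as a stopped apiserver before falling back to log collection. A minimal Go sketch of that probe, using the URL from the log and an ad-hoc client that skips certificate verification; this is illustrative only, not minikube's actual implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Ad-hoc probe client; the apiserver cert is self-signed in this VM,
	// so verification is skipped for this sketch only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.10:8443/healthz"
	for i := 0; i < 5; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// Matches the "stopped: ... connection refused" lines above.
			fmt.Printf("stopped: %s: %v\n", url, err)
		} else {
			fmt.Printf("healthz returned %s\n", resp.Status)
			resp.Body.Close()
		}
		time.Sleep(4 * time.Second)
	}
}
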
	I0819 20:08:32.133578  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:08:31.532633  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:08:31.533357  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:08:31.533416  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:08:31.533472  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:08:31.569216  486208 cri.go:89] found id: "3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:31.569246  486208 cri.go:89] found id: ""
	I0819 20:08:31.569254  486208 logs.go:276] 1 containers: [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b]
	I0819 20:08:31.569307  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:31.575102  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:08:31.575181  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:08:31.611252  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:31.611280  486208 cri.go:89] found id: ""
	I0819 20:08:31.611291  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:08:31.611352  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:31.615354  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:08:31.615427  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:08:31.655260  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:31.655289  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:31.655295  486208 cri.go:89] found id: ""
	I0819 20:08:31.655305  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:08:31.655414  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:31.659598  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:31.663482  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:08:31.663562  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:08:31.698936  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:31.698967  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:31.698973  486208 cri.go:89] found id: ""
	I0819 20:08:31.698982  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:08:31.699049  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:31.703191  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:31.707078  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:08:31.707164  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:08:31.743019  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:31.743052  486208 cri.go:89] found id: ""
	I0819 20:08:31.743061  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:08:31.743122  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:31.747213  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:08:31.747303  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:08:31.782223  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:31.782257  486208 cri.go:89] found id: "862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387"
	I0819 20:08:31.782263  486208 cri.go:89] found id: ""
	I0819 20:08:31.782272  486208 logs.go:276] 2 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74 862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387]
	I0819 20:08:31.782342  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:31.786649  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:31.790473  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:08:31.790551  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:08:31.825445  486208 cri.go:89] found id: ""
	I0819 20:08:31.825475  486208 logs.go:276] 0 containers: []
	W0819 20:08:31.825487  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:08:31.825512  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:08:31.825580  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:08:31.860470  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:31.860500  486208 cri.go:89] found id: ""
	I0819 20:08:31.860510  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:08:31.860579  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:31.864580  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:08:31.864613  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:08:31.931276  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:08:31.931302  486208 logs.go:123] Gathering logs for kube-apiserver [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b] ...
	I0819 20:08:31.931321  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:31.968997  486208 logs.go:123] Gathering logs for kube-controller-manager [862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387] ...
	I0819 20:08:31.969032  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387"
	I0819 20:08:32.004492  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:08:32.004524  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:08:32.311632  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:08:32.311695  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:32.354624  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:08:32.354657  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:08:32.465420  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:08:32.465463  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:32.508121  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:08:32.508163  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:32.542810  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:08:32.542856  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:32.608162  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:08:32.608212  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:32.647413  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:08:32.647448  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:08:32.661967  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:08:32.662006  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:32.696227  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:08:32.696263  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:32.729583  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:08:32.729618  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:32.764022  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:08:32.764064  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
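
Each pass also rediscovers the component containers with crictl before tailing their logs, which is what the repeated "listing CRI containers" / "found id" lines show. A small sketch of that discovery step, assuming crictl is reachable via sudo exactly as in the log; listContainers is a hypothetical helper name, not a minikube function:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs `sudo crictl ps -a --quiet --name=<component>` and
// returns the non-empty container IDs, mirroring the command in the log.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}

An empty result, as for "kindnet" above, simply yields zero IDs rather than an error.
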
	I0819 20:08:35.310323  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:08:35.311056  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:08:35.311129  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:08:35.311192  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:08:35.346956  486208 cri.go:89] found id: "3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:35.346992  486208 cri.go:89] found id: ""
	I0819 20:08:35.347003  486208 logs.go:276] 1 containers: [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b]
	I0819 20:08:35.347062  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:35.351009  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:08:35.351090  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:08:35.385729  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:35.385755  486208 cri.go:89] found id: ""
	I0819 20:08:35.385763  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:08:35.385832  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:35.389709  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:08:35.389796  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:08:35.425645  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:35.425676  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:35.425682  486208 cri.go:89] found id: ""
	I0819 20:08:35.425691  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:08:35.425755  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:35.429714  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:35.433539  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:08:35.433638  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:08:35.467915  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:35.467944  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:35.467949  486208 cri.go:89] found id: ""
	I0819 20:08:35.467956  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:08:35.468010  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:35.472413  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:35.476316  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:08:35.476392  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:08:35.511581  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:35.511612  486208 cri.go:89] found id: ""
	I0819 20:08:35.511621  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:08:35.511672  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:35.515800  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:08:35.515868  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:08:35.549641  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:35.549673  486208 cri.go:89] found id: "862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387"
	I0819 20:08:35.549680  486208 cri.go:89] found id: ""
	I0819 20:08:35.549692  486208 logs.go:276] 2 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74 862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387]
	I0819 20:08:35.549757  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:35.553645  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:35.557385  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:08:35.557450  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:08:38.213495  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:08:35.591635  486208 cri.go:89] found id: ""
	I0819 20:08:35.591666  486208 logs.go:276] 0 containers: []
	W0819 20:08:35.591678  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:08:35.591686  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:08:35.591754  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:08:35.626477  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:35.626504  486208 cri.go:89] found id: ""
	I0819 20:08:35.626513  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:08:35.626569  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:35.630409  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:08:35.630438  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:35.664573  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:08:35.664608  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:35.738209  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:08:35.738266  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:35.772366  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:08:35.772404  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:08:35.880766  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:08:35.880818  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:08:35.894580  486208 logs.go:123] Gathering logs for kube-apiserver [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b] ...
	I0819 20:08:35.894613  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:35.935188  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:08:35.935224  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:35.970538  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:08:35.970570  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:36.013621  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:08:36.013658  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:36.047971  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:08:36.048002  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:08:36.114258  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:08:36.114283  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:08:36.114298  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:36.149939  486208 logs.go:123] Gathering logs for kube-controller-manager [862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387] ...
	I0819 20:08:36.149983  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 862c833c881c82a052e0432156db8a3769f98f6577d24165a731599c8af8a387"
	I0819 20:08:36.185960  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:08:36.185995  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:36.219469  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:08:36.219503  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:08:36.543058  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:08:36.543099  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
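
The "Gathering logs for ..." lines each shell out to crictl or journalctl with a 400-line tail. A local sketch of those two calls, reusing the kube-apiserver container ID and unit name from the log; containerLogs and unitLogs are illustrative helpers, not minikube functions, and in the real flow these commands run over SSH inside the VM:

package main

import (
	"fmt"
	"os/exec"
)

// containerLogs tails the last 400 lines of one container, as in
// `sudo /usr/bin/crictl logs --tail 400 <id>`.
func containerLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

// unitLogs tails a systemd unit, as in `sudo journalctl -u kubelet -n 400`.
func unitLogs(unit string) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
	return string(out), err
}

func main() {
	if logs, err := unitLogs("kubelet"); err == nil {
		fmt.Println(len(logs), "bytes of kubelet logs")
	}
	if logs, err := containerLogs("3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"); err == nil {
		fmt.Println(len(logs), "bytes of kube-apiserver logs")
	}
}
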
	I0819 20:08:39.094229  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:08:39.095021  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:08:39.095086  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:08:39.095141  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:08:39.129602  486208 cri.go:89] found id: "3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:39.129627  486208 cri.go:89] found id: ""
	I0819 20:08:39.129638  486208 logs.go:276] 1 containers: [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b]
	I0819 20:08:39.129709  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:39.133763  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:08:39.133829  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:08:39.168528  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:39.168558  486208 cri.go:89] found id: ""
	I0819 20:08:39.168568  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:08:39.168632  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:39.172720  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:08:39.172809  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:08:39.225318  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:39.225353  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:39.225359  486208 cri.go:89] found id: ""
	I0819 20:08:39.225369  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:08:39.225434  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:39.232921  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:39.236885  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:08:39.236971  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:08:39.272930  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:39.272955  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:39.272960  486208 cri.go:89] found id: ""
	I0819 20:08:39.272967  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:08:39.273017  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:39.277375  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:39.281312  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:08:39.281383  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:08:39.316589  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:39.316623  486208 cri.go:89] found id: ""
	I0819 20:08:39.316633  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:08:39.316698  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:39.320710  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:08:39.320788  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:08:39.357012  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:39.357037  486208 cri.go:89] found id: ""
	I0819 20:08:39.357046  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:08:39.357105  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:39.361164  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:08:39.361257  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:08:39.396403  486208 cri.go:89] found id: ""
	I0819 20:08:39.396432  486208 logs.go:276] 0 containers: []
	W0819 20:08:39.396443  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:08:39.396451  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:08:39.396520  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:08:39.431014  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:39.431048  486208 cri.go:89] found id: ""
	I0819 20:08:39.431058  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:08:39.431128  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:39.435188  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:08:39.435220  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:39.470055  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:08:39.470096  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:39.513257  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:08:39.513300  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:39.548208  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:08:39.548244  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:39.583491  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:08:39.583524  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:39.653795  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:08:39.653833  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:39.686634  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:08:39.686679  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:39.720146  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:08:39.720182  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:08:39.766818  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:08:39.766850  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:08:39.876016  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:08:39.876059  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:08:39.890143  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:08:39.890175  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:08:39.957620  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:08:39.957646  486208 logs.go:123] Gathering logs for kube-apiserver [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b] ...
	I0819 20:08:39.957662  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:39.996543  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:08:39.996584  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:40.030317  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:08:40.030351  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:08:41.285420  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
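
The interleaved 486861 lines are a different profile's machine driver retrying a TCP dial to that node's SSH port. A minimal sketch of the dial, assuming the 192.168.61.196:22 address from the log; when the address is unreachable, the dial surfaces the same "connect: no route to host" error:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Plain TCP dial to the node's SSH port, as the libmachine lines retry.
	conn, err := net.DialTimeout("tcp", "192.168.61.196:22", 5*time.Second)
	if err != nil {
		fmt.Println("Error dialing TCP:", err) // e.g. connect: no route to host
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable")
}
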
	I0819 20:08:42.837009  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:08:42.837761  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:08:42.837832  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:08:42.837896  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:08:42.874703  486208 cri.go:89] found id: "3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:42.874736  486208 cri.go:89] found id: ""
	I0819 20:08:42.874746  486208 logs.go:276] 1 containers: [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b]
	I0819 20:08:42.874822  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:42.879868  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:08:42.879951  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:08:42.914584  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:42.914615  486208 cri.go:89] found id: ""
	I0819 20:08:42.914623  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:08:42.914683  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:42.919164  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:08:42.919236  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:08:42.959402  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:42.959427  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:42.959431  486208 cri.go:89] found id: ""
	I0819 20:08:42.959438  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:08:42.959491  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:42.963698  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:42.968146  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:08:42.968222  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:08:43.008832  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:43.008861  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:43.008867  486208 cri.go:89] found id: ""
	I0819 20:08:43.008877  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:08:43.008947  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:43.013970  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:43.018055  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:08:43.018131  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:08:43.054254  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:43.054293  486208 cri.go:89] found id: ""
	I0819 20:08:43.054304  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:08:43.054379  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:43.058557  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:08:43.058629  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:08:43.092070  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:43.092097  486208 cri.go:89] found id: ""
	I0819 20:08:43.092106  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:08:43.092157  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:43.096191  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:08:43.096257  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:08:43.135349  486208 cri.go:89] found id: ""
	I0819 20:08:43.135376  486208 logs.go:276] 0 containers: []
	W0819 20:08:43.135385  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:08:43.135391  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:08:43.135444  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:08:43.171280  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:43.171306  486208 cri.go:89] found id: ""
	I0819 20:08:43.171315  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:08:43.171374  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:43.175772  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:08:43.175797  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:43.210485  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:08:43.210521  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:43.247076  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:08:43.247115  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:43.282398  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:08:43.282433  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:43.318004  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:08:43.318034  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:08:43.635977  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:08:43.636023  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:08:43.681377  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:08:43.681420  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:08:43.781385  486208 logs.go:123] Gathering logs for kube-apiserver [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b] ...
	I0819 20:08:43.781434  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:43.818715  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:08:43.818758  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:43.854118  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:08:43.854154  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:43.896433  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:08:43.896471  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:43.970099  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:08:43.970143  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:44.005657  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:08:44.005686  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:08:44.019201  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:08:44.019236  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:08:44.087941  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
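
The recurring "failed describe nodes" warning comes from invoking the bundled kubectl against the in-VM kubeconfig, which points at localhost:8443; with the apiserver down, the command exits with status 1 and the "connection refused" message shown above. A sketch of just that step, using the binary and kubeconfig paths from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log runs for the "describe nodes" section.
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.0/kubectl",
		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// While the apiserver is down this reports exit status 1.
		fmt.Println("describe nodes failed:", err)
	}
}
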
	I0819 20:08:47.365421  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:08:46.588954  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:08:46.589720  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:08:46.589772  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:08:46.589820  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:08:46.624309  486208 cri.go:89] found id: "3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:46.624340  486208 cri.go:89] found id: ""
	I0819 20:08:46.624349  486208 logs.go:276] 1 containers: [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b]
	I0819 20:08:46.624406  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:46.628405  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:08:46.628506  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:08:46.663079  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:46.663102  486208 cri.go:89] found id: ""
	I0819 20:08:46.663109  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:08:46.663158  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:46.667239  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:08:46.667313  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:08:46.702864  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:46.702899  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:46.702904  486208 cri.go:89] found id: ""
	I0819 20:08:46.702911  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:08:46.702968  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:46.706956  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:46.710786  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:08:46.710854  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:08:46.745629  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:46.745652  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:46.745656  486208 cri.go:89] found id: ""
	I0819 20:08:46.745664  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:08:46.745715  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:46.749639  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:46.753370  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:08:46.753446  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:08:46.794587  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:46.794613  486208 cri.go:89] found id: ""
	I0819 20:08:46.794621  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:08:46.794676  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:46.798730  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:08:46.798816  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:08:46.833191  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:46.833219  486208 cri.go:89] found id: ""
	I0819 20:08:46.833227  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:08:46.833280  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:46.837344  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:08:46.837410  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:08:46.872244  486208 cri.go:89] found id: ""
	I0819 20:08:46.872276  486208 logs.go:276] 0 containers: []
	W0819 20:08:46.872285  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:08:46.872292  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:08:46.872348  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:08:46.907560  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:46.907591  486208 cri.go:89] found id: ""
	I0819 20:08:46.907607  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:08:46.907660  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:46.911699  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:08:46.911727  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:46.956451  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:08:46.956499  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:46.993574  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:08:46.993606  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:47.031817  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:08:47.031853  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:08:47.099175  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:08:47.099210  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:08:47.099229  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:47.175407  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:08:47.175451  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:08:47.217682  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:08:47.217719  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:47.257960  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:08:47.257996  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:08:47.359094  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:08:47.359146  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:08:47.373296  486208 logs.go:123] Gathering logs for kube-apiserver [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b] ...
	I0819 20:08:47.373336  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:47.413543  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:08:47.413586  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:08:47.742772  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:08:47.742834  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:47.779945  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:08:47.779980  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:47.819247  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:08:47.819278  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:50.354738  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:08:50.355520  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:08:50.355584  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:08:50.355635  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:08:50.391110  486208 cri.go:89] found id: "3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:50.391143  486208 cri.go:89] found id: ""
	I0819 20:08:50.391153  486208 logs.go:276] 1 containers: [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b]
	I0819 20:08:50.391240  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:50.395707  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:08:50.395787  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:08:50.436920  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:50.436949  486208 cri.go:89] found id: ""
	I0819 20:08:50.436957  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:08:50.437016  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:50.441518  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:08:50.441601  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:08:50.483916  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:50.483939  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:50.483943  486208 cri.go:89] found id: ""
	I0819 20:08:50.483950  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:08:50.484001  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:50.488222  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:50.492208  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:08:50.492284  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:08:50.529584  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:50.529614  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:50.529618  486208 cri.go:89] found id: ""
	I0819 20:08:50.529625  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:08:50.529677  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:50.533883  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:50.538090  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:08:50.538171  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:08:50.580103  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:50.580131  486208 cri.go:89] found id: ""
	I0819 20:08:50.580142  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:08:50.580209  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:50.584344  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:08:50.584425  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:08:50.437349  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:08:50.621356  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:50.621385  486208 cri.go:89] found id: ""
	I0819 20:08:50.621392  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:08:50.621444  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:50.625565  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:08:50.625633  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:08:50.664881  486208 cri.go:89] found id: ""
	I0819 20:08:50.664923  486208 logs.go:276] 0 containers: []
	W0819 20:08:50.664936  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:08:50.664943  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:08:50.665016  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:08:50.704504  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:50.704529  486208 cri.go:89] found id: ""
	I0819 20:08:50.704543  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:08:50.704608  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:50.708769  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:08:50.708803  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:08:50.722333  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:08:50.722368  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:50.763988  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:08:50.764022  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:50.833315  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:08:50.833358  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:50.867667  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:08:50.867726  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:08:50.969290  486208 logs.go:123] Gathering logs for kube-apiserver [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b] ...
	I0819 20:08:50.969335  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:51.006835  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:08:51.006872  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:51.048429  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:08:51.048458  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:51.084077  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:08:51.084110  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:08:51.131627  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:08:51.131658  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:08:51.443334  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:08:51.443399  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:08:51.514316  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:08:51.514348  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:08:51.514368  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:51.559377  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:08:51.559418  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:51.595281  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:08:51.595316  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:54.129428  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:08:54.130193  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:08:54.130279  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:08:54.130347  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:08:54.165853  486208 cri.go:89] found id: "3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:54.165883  486208 cri.go:89] found id: ""
	I0819 20:08:54.165892  486208 logs.go:276] 1 containers: [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b]
	I0819 20:08:54.165961  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:54.170196  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:08:54.170279  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:08:54.207072  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:54.207099  486208 cri.go:89] found id: ""
	I0819 20:08:54.207109  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:08:54.207182  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:54.211559  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:08:54.211648  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:08:54.245909  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:54.245941  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:54.245946  486208 cri.go:89] found id: ""
	I0819 20:08:54.245955  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:08:54.246033  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:54.251563  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:54.255766  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:08:54.255843  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:08:54.295224  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:54.295249  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:54.295254  486208 cri.go:89] found id: ""
	I0819 20:08:54.295261  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:08:54.295317  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:54.299459  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:54.303457  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:08:54.303546  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:08:54.338070  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:54.338108  486208 cri.go:89] found id: ""
	I0819 20:08:54.338120  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:08:54.338181  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:54.342242  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:08:54.342326  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:08:54.384618  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:54.384649  486208 cri.go:89] found id: ""
	I0819 20:08:54.384658  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:08:54.384714  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:54.388917  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:08:54.388978  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:08:54.422932  486208 cri.go:89] found id: ""
	I0819 20:08:54.422962  486208 logs.go:276] 0 containers: []
	W0819 20:08:54.422970  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:08:54.422978  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:08:54.423040  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:08:54.456656  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:54.456681  486208 cri.go:89] found id: ""
	I0819 20:08:54.456689  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:08:54.456741  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:08:54.460664  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:08:54.460694  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:08:54.495272  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:08:54.495305  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:08:54.529178  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:08:54.529216  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:08:54.846831  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:08:54.846875  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:08:54.887524  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:08:54.887560  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:08:54.954420  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:08:54.954448  486208 logs.go:123] Gathering logs for kube-apiserver [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b] ...
	I0819 20:08:54.954466  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:08:54.992704  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:08:54.992741  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:08:55.062875  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:08:55.062919  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:08:55.077241  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:08:55.077278  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:08:55.181765  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:08:55.181811  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:08:55.217161  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:08:55.217216  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:08:55.252402  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:08:55.252442  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:08:55.293954  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:08:55.293998  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:08:55.330244  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:08:55.330284  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:08:56.517446  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:08:59.589444  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:08:57.866589  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:09:02.867053  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 20:09:02.867133  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:09:02.867202  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:09:02.902515  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:02.902549  486208 cri.go:89] found id: "3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:09:02.902555  486208 cri.go:89] found id: ""
	I0819 20:09:02.902565  486208 logs.go:276] 2 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b]
	I0819 20:09:02.902633  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:02.907063  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:02.911148  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:09:02.911214  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:09:02.947415  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:02.947439  486208 cri.go:89] found id: ""
	I0819 20:09:02.947447  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:09:02.947506  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:02.951678  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:09:02.951771  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:09:02.985739  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:02.985768  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:02.985774  486208 cri.go:89] found id: ""
	I0819 20:09:02.985783  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:09:02.985856  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:02.991138  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:02.995122  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:09:02.995207  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:09:03.029893  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:03.029921  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:03.029927  486208 cri.go:89] found id: ""
	I0819 20:09:03.029937  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:09:03.030007  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:03.034297  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:03.038799  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:09:03.038885  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:09:03.079982  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:03.080011  486208 cri.go:89] found id: ""
	I0819 20:09:03.080021  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:09:03.080092  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:03.084185  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:09:03.084262  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:09:03.119132  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:03.119162  486208 cri.go:89] found id: ""
	I0819 20:09:03.119172  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:09:03.119240  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:03.123158  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:09:03.123241  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:09:03.158128  486208 cri.go:89] found id: ""
	I0819 20:09:03.158155  486208 logs.go:276] 0 containers: []
	W0819 20:09:03.158164  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:09:03.158170  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:09:03.158226  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:09:03.194377  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:03.194406  486208 cri.go:89] found id: ""
	I0819 20:09:03.194416  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:09:03.194475  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:03.198498  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:09:03.198523  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:09:03.250047  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:09:03.250086  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 20:09:05.669487  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:09:08.741487  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:09:14.821508  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:09:13.320774  486208 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.070649773s)
	W0819 20:09:13.320831  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0819 20:09:13.320842  486208 logs.go:123] Gathering logs for kube-apiserver [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b] ...
	I0819 20:09:13.320854  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:09:13.358255  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:09:13.358293  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:13.392409  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:09:13.392447  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:13.427484  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:09:13.427520  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:13.463528  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:09:13.463570  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:13.501994  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:09:13.502041  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:13.536771  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:09:13.536811  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:09:13.932686  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:09:13.932739  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:09:14.030438  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:09:14.030483  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:09:14.045311  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:09:14.045351  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:14.114667  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:09:14.114711  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:14.151357  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:09:14.151391  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:14.184555  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:09:14.184591  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:17.893420  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:09:16.719628  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:09:16.891406  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": read tcp 192.168.50.1:54794->192.168.50.10:8443: read: connection reset by peer
	I0819 20:09:16.891488  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:09:16.891573  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:09:16.934530  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:16.934556  486208 cri.go:89] found id: "3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	I0819 20:09:16.934561  486208 cri.go:89] found id: ""
	I0819 20:09:16.934568  486208 logs.go:276] 2 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b]
	I0819 20:09:16.934620  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:16.938965  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:16.943005  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:09:16.943088  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:09:16.977943  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:16.977971  486208 cri.go:89] found id: ""
	I0819 20:09:16.977978  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:09:16.978035  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:16.982053  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:09:16.982139  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:09:17.022815  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:17.022846  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:17.022851  486208 cri.go:89] found id: ""
	I0819 20:09:17.022859  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:09:17.022927  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:17.028446  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:17.032407  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:09:17.032498  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:09:17.065572  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:17.065600  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:17.065605  486208 cri.go:89] found id: ""
	I0819 20:09:17.065614  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:09:17.065682  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:17.069533  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:17.073363  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:09:17.073441  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:09:17.107650  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:17.107679  486208 cri.go:89] found id: ""
	I0819 20:09:17.107689  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:09:17.107744  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:17.111985  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:09:17.112056  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:09:17.150740  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:17.150768  486208 cri.go:89] found id: ""
	I0819 20:09:17.150776  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:09:17.150827  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:17.154840  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:09:17.154911  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:09:17.189928  486208 cri.go:89] found id: ""
	I0819 20:09:17.189957  486208 logs.go:276] 0 containers: []
	W0819 20:09:17.189966  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:09:17.189972  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:09:17.190025  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:09:17.225291  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:17.225321  486208 cri.go:89] found id: ""
	I0819 20:09:17.225329  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:09:17.225394  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:17.229326  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:09:17.229353  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:09:17.337801  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:09:17.337837  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:09:17.411988  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:09:17.412018  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:09:17.412035  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:17.451445  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:09:17.451481  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:17.486819  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:09:17.486857  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:17.521841  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:09:17.521873  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:09:17.928673  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:09:17.928723  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:09:17.943131  486208 logs.go:123] Gathering logs for kube-apiserver [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b] ...
	I0819 20:09:17.943165  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	W0819 20:09:17.981608  486208 logs.go:130] failed kube-apiserver [3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b": Process exited with status 1
	stdout:
	
	stderr:
	E0819 20:09:17.955704    7687 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b\": container with ID starting with 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b not found: ID does not exist" containerID="3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	time="2024-08-19T20:09:17Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b\": container with ID starting with 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b not found: ID does not exist"
	 output: 
	** stderr ** 
	E0819 20:09:17.955704    7687 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b\": container with ID starting with 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b not found: ID does not exist" containerID="3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b"
	time="2024-08-19T20:09:17Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b\": container with ID starting with 3c978552fdcb0afa3abbe2c3db78ebed92953604e23c9e5d7b24009d42bbb72b not found: ID does not exist"
	
	** /stderr **
	I0819 20:09:17.981641  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:09:17.981659  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:18.019573  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:09:18.019608  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:18.056275  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:09:18.056313  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:18.099369  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:09:18.099406  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:18.134637  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:09:18.134671  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:18.171057  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:09:18.171090  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:09:18.217221  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:09:18.217257  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:23.973447  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:09:20.797883  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:09:20.798598  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:09:20.798664  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:09:20.798722  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:09:20.835384  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:20.835412  486208 cri.go:89] found id: ""
	I0819 20:09:20.835422  486208 logs.go:276] 1 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5]
	I0819 20:09:20.835487  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:20.839831  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:09:20.839915  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:09:20.875533  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:20.875564  486208 cri.go:89] found id: ""
	I0819 20:09:20.875575  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:09:20.875640  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:20.879750  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:09:20.879847  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:09:20.914319  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:20.914345  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:20.914350  486208 cri.go:89] found id: ""
	I0819 20:09:20.914358  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:09:20.914417  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:20.918809  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:20.922826  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:09:20.922894  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:09:20.962747  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:20.962773  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:20.962777  486208 cri.go:89] found id: ""
	I0819 20:09:20.962785  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:09:20.962847  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:20.967012  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:20.971385  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:09:20.971454  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:09:21.006495  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:21.006523  486208 cri.go:89] found id: ""
	I0819 20:09:21.006533  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:09:21.006593  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:21.010882  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:09:21.010962  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:09:21.046101  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:21.046131  486208 cri.go:89] found id: ""
	I0819 20:09:21.046140  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:09:21.046207  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:21.050446  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:09:21.050528  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:09:21.092307  486208 cri.go:89] found id: ""
	I0819 20:09:21.092336  486208 logs.go:276] 0 containers: []
	W0819 20:09:21.092350  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:09:21.092357  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:09:21.092415  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:09:21.129895  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:21.129922  486208 cri.go:89] found id: ""
	I0819 20:09:21.129931  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:09:21.129984  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:21.133957  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:09:21.133985  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:09:21.147811  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:09:21.147843  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:09:21.213584  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:09:21.213613  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:09:21.213627  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:21.250971  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:09:21.251012  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:09:21.290900  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:09:21.290935  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:21.326452  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:09:21.326484  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:21.366448  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:09:21.366483  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:21.416487  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:09:21.416524  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:21.453637  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:09:21.453684  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:21.489163  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:09:21.489195  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:21.522486  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:09:21.522526  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:21.555832  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:09:21.555872  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:09:21.909374  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:09:21.909424  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:09:22.019121  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:09:22.019173  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:24.602018  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:09:24.602798  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:09:24.602856  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:09:24.602921  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:09:24.636989  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:24.637016  486208 cri.go:89] found id: ""
	I0819 20:09:24.637025  486208 logs.go:276] 1 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5]
	I0819 20:09:24.637081  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:24.641082  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:09:24.641174  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:09:24.675699  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:24.675724  486208 cri.go:89] found id: ""
	I0819 20:09:24.675738  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:09:24.675792  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:24.679987  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:09:24.680083  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:09:24.722157  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:24.722190  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:24.722195  486208 cri.go:89] found id: ""
	I0819 20:09:24.722205  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:09:24.722274  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:24.726327  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:24.730202  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:09:24.730268  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:09:24.765542  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:24.765567  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:24.765571  486208 cri.go:89] found id: ""
	I0819 20:09:24.765578  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:09:24.765630  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:24.769581  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:24.773299  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:09:24.773374  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:09:24.813308  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:24.813335  486208 cri.go:89] found id: ""
	I0819 20:09:24.813344  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:09:24.813399  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:24.817386  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:09:24.817454  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:09:24.859269  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:24.859296  486208 cri.go:89] found id: ""
	I0819 20:09:24.859304  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:09:24.859357  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:24.863543  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:09:24.863616  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:09:24.904121  486208 cri.go:89] found id: ""
	I0819 20:09:24.904156  486208 logs.go:276] 0 containers: []
	W0819 20:09:24.904165  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:09:24.904171  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:09:24.904235  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:09:24.945420  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:24.945447  486208 cri.go:89] found id: ""
	I0819 20:09:24.945455  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:09:24.945504  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:24.950532  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:09:24.950565  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:09:25.063816  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:09:25.063865  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:09:25.145479  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:09:25.145508  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:09:25.145526  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:25.187491  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:09:25.187530  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:25.226566  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:09:25.226602  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:09:25.535523  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:09:25.535572  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:09:25.580514  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:09:25.580553  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:27.045485  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:09:25.629228  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:09:25.629266  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:25.667910  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:09:25.667946  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:25.746839  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:09:25.746884  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:25.782952  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:09:25.782987  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:25.816746  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:09:25.816781  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:09:25.830165  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:09:25.830195  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:25.870726  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:09:25.870757  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:28.405230  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:09:28.405908  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:09:28.405964  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:09:28.406017  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:09:28.441988  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:28.442017  486208 cri.go:89] found id: ""
	I0819 20:09:28.442026  486208 logs.go:276] 1 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5]
	I0819 20:09:28.442081  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:28.446188  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:09:28.446262  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:09:28.481656  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:28.481681  486208 cri.go:89] found id: ""
	I0819 20:09:28.481689  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:09:28.481746  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:28.485790  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:09:28.485859  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:09:28.520295  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:28.520327  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:28.520331  486208 cri.go:89] found id: ""
	I0819 20:09:28.520341  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:09:28.520403  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:28.524548  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:28.528303  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:09:28.528370  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:09:28.572873  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:28.572906  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:28.572912  486208 cri.go:89] found id: ""
	I0819 20:09:28.572923  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:09:28.572988  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:28.577076  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:28.580929  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:09:28.581004  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:09:28.619154  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:28.619191  486208 cri.go:89] found id: ""
	I0819 20:09:28.619202  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:09:28.619272  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:28.623239  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:09:28.623329  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:09:28.657925  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:28.657958  486208 cri.go:89] found id: ""
	I0819 20:09:28.657967  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:09:28.658033  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:28.661901  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:09:28.661992  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:09:28.700141  486208 cri.go:89] found id: ""
	I0819 20:09:28.700184  486208 logs.go:276] 0 containers: []
	W0819 20:09:28.700196  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:09:28.700204  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:09:28.700285  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:09:28.736822  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:28.736854  486208 cri.go:89] found id: ""
	I0819 20:09:28.736871  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:09:28.736933  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:28.741066  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:09:28.741098  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:28.775395  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:09:28.775433  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:28.813069  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:09:28.813105  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:09:28.851362  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:09:28.851402  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:28.894263  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:09:28.894306  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:28.968418  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:09:28.968465  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:29.001678  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:09:29.001715  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:29.034045  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:09:29.034078  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:29.068983  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:09:29.069015  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:09:29.395445  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:09:29.395506  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:09:29.461584  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:09:29.461610  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:09:29.461631  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:29.498852  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:09:29.498893  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:29.532151  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:09:29.532185  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:09:29.638265  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:09:29.638315  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:09:33.125406  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:09:32.152626  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:09:32.153391  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:09:32.153445  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:09:32.153506  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:09:32.190009  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:32.190038  486208 cri.go:89] found id: ""
	I0819 20:09:32.190048  486208 logs.go:276] 1 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5]
	I0819 20:09:32.190108  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:32.194049  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:09:32.194121  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:09:32.229091  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:32.229122  486208 cri.go:89] found id: ""
	I0819 20:09:32.229142  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:09:32.229198  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:32.233294  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:09:32.233366  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:09:32.268822  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:32.268849  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:32.268854  486208 cri.go:89] found id: ""
	I0819 20:09:32.268865  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:09:32.268925  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:32.272921  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:32.276716  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:09:32.276794  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:09:32.311363  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:32.311397  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:32.311401  486208 cri.go:89] found id: ""
	I0819 20:09:32.311410  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:09:32.311464  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:32.315534  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:32.319419  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:09:32.319488  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:09:32.353847  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:32.353877  486208 cri.go:89] found id: ""
	I0819 20:09:32.353888  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:09:32.353951  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:32.357838  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:09:32.357914  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:09:32.397584  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:32.397621  486208 cri.go:89] found id: ""
	I0819 20:09:32.397632  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:09:32.397699  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:32.401798  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:09:32.401882  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:09:32.435184  486208 cri.go:89] found id: ""
	I0819 20:09:32.435213  486208 logs.go:276] 0 containers: []
	W0819 20:09:32.435223  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:09:32.435229  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:09:32.435291  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:09:32.470531  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:32.470561  486208 cri.go:89] found id: ""
	I0819 20:09:32.470569  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:09:32.470630  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:32.474498  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:09:32.474534  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:32.509102  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:09:32.509159  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:09:32.576179  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:09:32.576209  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:09:32.576226  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:32.619981  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:09:32.620017  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:32.656241  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:09:32.656280  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:32.691971  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:09:32.692008  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:32.726388  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:09:32.726465  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:09:32.831210  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:09:32.831256  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:32.866239  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:09:32.866276  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:09:32.904549  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:09:32.904586  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:32.974589  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:09:32.974630  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:09:33.297688  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:09:33.297741  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:33.332488  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:09:33.332522  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:09:33.346176  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:09:33.346213  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:36.197410  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:09:35.883507  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:09:35.884212  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:09:35.884269  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:09:35.884325  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:09:35.920490  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:35.920515  486208 cri.go:89] found id: ""
	I0819 20:09:35.920522  486208 logs.go:276] 1 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5]
	I0819 20:09:35.920586  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:35.924669  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:09:35.924747  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:09:35.959596  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:35.959621  486208 cri.go:89] found id: ""
	I0819 20:09:35.959636  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:09:35.959691  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:35.963787  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:09:35.963874  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:09:36.001943  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:36.001972  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:36.001977  486208 cri.go:89] found id: ""
	I0819 20:09:36.001985  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:09:36.002039  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:36.006321  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:36.010174  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:09:36.010259  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:09:36.049058  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:36.049092  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:36.049098  486208 cri.go:89] found id: ""
	I0819 20:09:36.049109  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:09:36.049183  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:36.053112  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:36.056991  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:09:36.057078  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:09:36.092631  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:36.092663  486208 cri.go:89] found id: ""
	I0819 20:09:36.092674  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:09:36.092739  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:36.096756  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:09:36.096854  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:09:36.132489  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:36.132514  486208 cri.go:89] found id: ""
	I0819 20:09:36.132521  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:09:36.132582  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:36.136583  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:09:36.136652  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:09:36.170450  486208 cri.go:89] found id: ""
	I0819 20:09:36.170484  486208 logs.go:276] 0 containers: []
	W0819 20:09:36.170497  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:09:36.170505  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:09:36.170585  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:09:36.207580  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:36.207615  486208 cri.go:89] found id: ""
	I0819 20:09:36.207627  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:09:36.207694  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:36.211683  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:09:36.211710  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:36.246406  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:09:36.246444  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:36.319062  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:09:36.319111  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:09:36.358480  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:09:36.358518  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:09:36.464598  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:09:36.464645  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:09:36.479143  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:09:36.479182  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:36.511738  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:09:36.511774  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:36.547660  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:09:36.547695  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:36.583730  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:09:36.583768  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:09:36.898390  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:09:36.898436  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:36.934154  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:09:36.934186  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:36.968911  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:09:36.968950  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:09:37.039383  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:09:37.039416  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:09:37.039435  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:37.076405  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:09:37.076442  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:39.618110  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:09:39.618864  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:09:39.618926  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:09:39.618991  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:09:39.658830  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:39.658862  486208 cri.go:89] found id: ""
	I0819 20:09:39.658871  486208 logs.go:276] 1 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5]
	I0819 20:09:39.658926  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:39.663246  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:09:39.663332  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:09:39.699385  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:39.699415  486208 cri.go:89] found id: ""
	I0819 20:09:39.699425  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:09:39.699476  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:39.703562  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:09:39.703637  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:09:39.738939  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:39.738963  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:39.738966  486208 cri.go:89] found id: ""
	I0819 20:09:39.738974  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:09:39.739027  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:39.743130  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:39.747317  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:09:39.747450  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:09:39.783545  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:39.783578  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:39.783582  486208 cri.go:89] found id: ""
	I0819 20:09:39.783589  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:09:39.783642  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:39.787646  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:39.791475  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:09:39.791559  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:09:39.826365  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:39.826392  486208 cri.go:89] found id: ""
	I0819 20:09:39.826400  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:09:39.826454  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:39.831195  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:09:39.831266  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:09:39.870455  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:39.870485  486208 cri.go:89] found id: ""
	I0819 20:09:39.870495  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:09:39.870566  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:39.875128  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:09:39.875208  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:09:39.910003  486208 cri.go:89] found id: ""
	I0819 20:09:39.910033  486208 logs.go:276] 0 containers: []
	W0819 20:09:39.910044  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:09:39.910053  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:09:39.910121  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:09:39.945606  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:39.945630  486208 cri.go:89] found id: ""
	I0819 20:09:39.945638  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:09:39.945691  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:39.949810  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:09:39.949842  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:39.995518  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:09:39.995553  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:40.072626  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:09:40.072670  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:40.111630  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:09:40.111664  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:09:40.441793  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:09:40.441847  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:09:40.546472  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:09:40.546519  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:09:40.560505  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:09:40.560544  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:42.277460  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:09:40.597812  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:09:40.597852  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:09:40.668268  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:09:40.668304  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:09:40.668322  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:40.701892  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:09:40.701930  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:40.739376  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:09:40.739415  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:40.772822  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:09:40.772855  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:09:40.812050  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:09:40.812089  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:40.846167  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:09:40.846207  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:43.381342  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:09:43.382094  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:09:43.382150  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:09:43.382208  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:09:43.416096  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:43.416127  486208 cri.go:89] found id: ""
	I0819 20:09:43.416142  486208 logs.go:276] 1 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5]
	I0819 20:09:43.416212  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:43.421846  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:09:43.421913  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:09:43.460240  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:43.460264  486208 cri.go:89] found id: ""
	I0819 20:09:43.460272  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:09:43.460324  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:43.464589  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:09:43.464660  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:09:43.497948  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:43.497974  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:43.497977  486208 cri.go:89] found id: ""
	I0819 20:09:43.497985  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:09:43.498045  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:43.501971  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:43.506169  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:09:43.506240  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:09:43.543832  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:43.543859  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:43.543863  486208 cri.go:89] found id: ""
	I0819 20:09:43.543871  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:09:43.543925  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:43.548155  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:43.552093  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:09:43.552172  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:09:43.588214  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:43.588254  486208 cri.go:89] found id: ""
	I0819 20:09:43.588266  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:09:43.588324  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:43.592354  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:09:43.592432  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:09:43.628762  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:43.628791  486208 cri.go:89] found id: ""
	I0819 20:09:43.628800  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:09:43.628883  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:43.633751  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:09:43.633834  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:09:43.669746  486208 cri.go:89] found id: ""
	I0819 20:09:43.669774  486208 logs.go:276] 0 containers: []
	W0819 20:09:43.669784  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:09:43.669790  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:09:43.669852  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:09:43.708482  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:43.708507  486208 cri.go:89] found id: ""
	I0819 20:09:43.708515  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:09:43.708572  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:43.712850  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:09:43.712880  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:09:43.778240  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:09:43.778267  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:09:43.778284  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:43.821356  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:09:43.821390  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:43.860397  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:09:43.860427  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:43.898105  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:09:43.898139  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:09:44.207671  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:09:44.207718  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:44.252103  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:09:44.252145  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:44.286721  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:09:44.286752  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:44.320302  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:09:44.320338  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:09:44.426439  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:09:44.426480  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:44.500140  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:09:44.500188  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:44.537030  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:09:44.537073  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:09:44.551622  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:09:44.551655  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:44.588963  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:09:44.589009  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:09:45.349522  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:09:47.134360  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:09:47.135099  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:09:47.135162  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:09:47.135216  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:09:47.172254  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:47.172285  486208 cri.go:89] found id: ""
	I0819 20:09:47.172294  486208 logs.go:276] 1 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5]
	I0819 20:09:47.172352  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:47.176418  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:09:47.176490  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:09:47.212201  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:47.212224  486208 cri.go:89] found id: ""
	I0819 20:09:47.212232  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:09:47.212287  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:47.216377  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:09:47.216451  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:09:47.251993  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:47.252018  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:47.252022  486208 cri.go:89] found id: ""
	I0819 20:09:47.252033  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:09:47.252084  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:47.256046  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:47.260178  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:09:47.260260  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:09:47.300435  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:47.300466  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:47.300472  486208 cri.go:89] found id: ""
	I0819 20:09:47.300483  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:09:47.300553  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:47.304913  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:47.308971  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:09:47.309055  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:09:47.349207  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:47.349242  486208 cri.go:89] found id: ""
	I0819 20:09:47.349253  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:09:47.349320  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:47.353325  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:09:47.353412  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:09:47.387718  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:47.387743  486208 cri.go:89] found id: ""
	I0819 20:09:47.387751  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:09:47.387804  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:47.391939  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:09:47.392010  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:09:47.434500  486208 cri.go:89] found id: ""
	I0819 20:09:47.434532  486208 logs.go:276] 0 containers: []
	W0819 20:09:47.434541  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:09:47.434549  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:09:47.434622  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:09:47.471367  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:47.471396  486208 cri.go:89] found id: ""
	I0819 20:09:47.471405  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:09:47.471463  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:47.475494  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:09:47.475530  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:47.515772  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:09:47.515812  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:47.555969  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:09:47.556006  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:47.589768  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:09:47.589826  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:09:47.654234  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:09:47.654260  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:09:47.654276  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:47.728877  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:09:47.728920  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:47.764376  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:09:47.764413  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:09:48.087286  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:09:48.087341  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:09:48.101146  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:09:48.101182  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:48.135530  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:09:48.135575  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:09:48.174193  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:09:48.174226  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:09:48.281214  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:09:48.281257  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:48.320374  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:09:48.320407  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:48.355678  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:09:48.355710  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:51.429446  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:09:54.501457  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:09:50.889638  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:09:50.890365  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:09:50.890432  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:09:50.890498  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:09:50.927837  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:50.927871  486208 cri.go:89] found id: ""
	I0819 20:09:50.927881  486208 logs.go:276] 1 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5]
	I0819 20:09:50.927945  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:50.931904  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:09:50.931986  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:09:50.968169  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:50.968205  486208 cri.go:89] found id: ""
	I0819 20:09:50.968217  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:09:50.968285  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:50.972261  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:09:50.972333  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:09:51.011159  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:51.011187  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:51.011192  486208 cri.go:89] found id: ""
	I0819 20:09:51.011200  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:09:51.011256  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:51.015459  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:51.019281  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:09:51.019348  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:09:51.054272  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:51.054302  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:51.054306  486208 cri.go:89] found id: ""
	I0819 20:09:51.054314  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:09:51.054366  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:51.058622  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:51.062622  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:09:51.062693  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:09:51.097630  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:51.097655  486208 cri.go:89] found id: ""
	I0819 20:09:51.097663  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:09:51.097713  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:51.101530  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:09:51.101601  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:09:51.136042  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:51.136071  486208 cri.go:89] found id: ""
	I0819 20:09:51.136079  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:09:51.136133  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:51.140296  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:09:51.140364  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:09:51.174307  486208 cri.go:89] found id: ""
	I0819 20:09:51.174337  486208 logs.go:276] 0 containers: []
	W0819 20:09:51.174347  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:09:51.174353  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:09:51.174413  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:09:51.211230  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:51.211254  486208 cri.go:89] found id: ""
	I0819 20:09:51.211262  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:09:51.211329  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:51.215285  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:09:51.215310  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:51.249432  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:09:51.249464  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:09:51.561281  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:09:51.561327  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:51.600554  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:09:51.600592  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:51.678015  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:09:51.678059  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:51.712301  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:09:51.712339  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:09:51.762305  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:09:51.762340  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:09:51.876150  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:09:51.876194  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:51.915391  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:09:51.915430  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:51.948939  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:09:51.948978  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:51.983155  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:09:51.983190  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:09:51.997167  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:09:51.997204  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:09:52.063441  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:09:52.063474  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:09:52.063499  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:52.097779  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:09:52.097813  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:54.637427  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:09:54.638290  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:09:54.638360  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:09:54.638424  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:09:54.677599  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:54.677621  486208 cri.go:89] found id: ""
	I0819 20:09:54.677629  486208 logs.go:276] 1 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5]
	I0819 20:09:54.677698  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:54.682428  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:09:54.682504  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:09:54.718135  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:54.718159  486208 cri.go:89] found id: ""
	I0819 20:09:54.718167  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:09:54.718222  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:54.722440  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:09:54.722515  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:09:54.757919  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:54.757943  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:54.757947  486208 cri.go:89] found id: ""
	I0819 20:09:54.757956  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:09:54.758015  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:54.761915  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:54.765638  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:09:54.765703  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:09:54.798933  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:54.798960  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:54.798969  486208 cri.go:89] found id: ""
	I0819 20:09:54.798976  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:09:54.799030  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:54.802883  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:54.807026  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:09:54.807098  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:09:54.841492  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:54.841513  486208 cri.go:89] found id: ""
	I0819 20:09:54.841521  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:09:54.841574  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:54.845579  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:09:54.845665  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:09:54.880601  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:54.880633  486208 cri.go:89] found id: ""
	I0819 20:09:54.880644  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:09:54.880698  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:54.884726  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:09:54.884811  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:09:54.921206  486208 cri.go:89] found id: ""
	I0819 20:09:54.921235  486208 logs.go:276] 0 containers: []
	W0819 20:09:54.921246  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:09:54.921255  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:09:54.921318  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:09:54.957330  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:54.957356  486208 cri.go:89] found id: ""
	I0819 20:09:54.957364  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:09:54.957420  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:54.962059  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:09:54.962090  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:55.003506  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:09:55.003546  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:55.082969  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:09:55.083012  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:09:55.122776  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:09:55.122816  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:09:55.196670  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:09:55.196698  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:09:55.196712  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:55.231988  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:09:55.232024  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:55.265733  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:09:55.265786  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:55.300173  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:09:55.300212  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:55.335305  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:09:55.335338  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:09:55.669431  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:09:55.669481  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:09:55.786493  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:09:55.786548  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:09:55.801748  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:09:55.801785  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:55.841571  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:09:55.841602  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:55.876067  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:09:55.876101  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:58.410863  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:09:58.411630  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:09:58.411697  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:09:58.411749  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:09:58.446661  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:58.446698  486208 cri.go:89] found id: ""
	I0819 20:09:58.446709  486208 logs.go:276] 1 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5]
	I0819 20:09:58.446764  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:58.451022  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:09:58.451092  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:09:58.486614  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:58.486641  486208 cri.go:89] found id: ""
	I0819 20:09:58.486652  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:09:58.486710  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:58.490749  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:09:58.490816  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:09:58.525918  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:58.525943  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:58.525947  486208 cri.go:89] found id: ""
	I0819 20:09:58.525954  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:09:58.526007  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:58.530093  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:58.533914  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:09:58.533997  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:09:58.568579  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:58.568614  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:58.568618  486208 cri.go:89] found id: ""
	I0819 20:09:58.568626  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:09:58.568681  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:58.572717  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:58.576575  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:09:58.576662  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:09:58.608984  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:58.609013  486208 cri.go:89] found id: ""
	I0819 20:09:58.609024  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:09:58.609091  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:58.615197  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:09:58.615276  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:09:58.648850  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:58.648880  486208 cri.go:89] found id: ""
	I0819 20:09:58.648890  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:09:58.648949  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:58.653008  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:09:58.653100  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:09:58.687173  486208 cri.go:89] found id: ""
	I0819 20:09:58.687205  486208 logs.go:276] 0 containers: []
	W0819 20:09:58.687217  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:09:58.687225  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:09:58.687302  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:09:58.722696  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:58.722722  486208 cri.go:89] found id: ""
	I0819 20:09:58.722729  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:09:58.722785  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:09:58.726871  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:09:58.726906  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:09:58.743461  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:09:58.743504  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:09:58.787303  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:09:58.787334  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:09:58.823186  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:09:58.823218  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:09:58.941841  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:09:58.941883  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:09:58.979364  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:09:58.979401  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:09:59.020521  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:09:59.020558  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:09:59.056208  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:09:59.056243  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:09:59.093006  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:09:59.093038  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:09:59.156034  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:09:59.156067  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:09:59.156083  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:09:59.231532  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:09:59.231577  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:09:59.267653  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:09:59.267688  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:09:59.308177  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:09:59.308210  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:09:59.342568  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:09:59.342605  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:10:00.581379  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:10:03.653424  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:10:02.161683  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:10:02.162399  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:10:02.162474  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:10:02.162527  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:10:02.197369  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:10:02.197399  486208 cri.go:89] found id: ""
	I0819 20:10:02.197409  486208 logs.go:276] 1 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5]
	I0819 20:10:02.197469  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:02.201419  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:10:02.201514  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:10:02.242302  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:10:02.242331  486208 cri.go:89] found id: ""
	I0819 20:10:02.242341  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:10:02.242406  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:02.246510  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:10:02.246608  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:10:02.281121  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:10:02.281158  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:10:02.281162  486208 cri.go:89] found id: ""
	I0819 20:10:02.281170  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:10:02.281223  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:02.285382  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:02.289527  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:10:02.289599  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:10:02.324309  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:10:02.324338  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:10:02.324342  486208 cri.go:89] found id: ""
	I0819 20:10:02.324350  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:10:02.324405  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:02.328777  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:02.332982  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:10:02.333059  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:10:02.367727  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:10:02.367765  486208 cri.go:89] found id: ""
	I0819 20:10:02.367776  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:10:02.367846  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:02.372010  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:10:02.372084  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:10:02.406379  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:10:02.406419  486208 cri.go:89] found id: ""
	I0819 20:10:02.406429  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:10:02.406498  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:02.410772  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:10:02.410865  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:10:02.455074  486208 cri.go:89] found id: ""
	I0819 20:10:02.455103  486208 logs.go:276] 0 containers: []
	W0819 20:10:02.455114  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:10:02.455122  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:10:02.455190  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:10:02.490421  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:10:02.490456  486208 cri.go:89] found id: ""
	I0819 20:10:02.490467  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:10:02.490532  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:02.494409  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:10:02.494446  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:10:02.537949  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:10:02.537991  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:10:02.577047  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:10:02.577090  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:10:02.612704  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:10:02.612749  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:10:02.650083  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:10:02.650128  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:10:02.689966  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:10:02.690005  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:10:02.704446  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:10:02.704502  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:10:02.770590  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:10:02.770627  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:10:02.770646  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:10:02.806094  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:10:02.806135  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:10:02.839518  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:10:02.839555  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:10:02.920891  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:10:02.920932  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:10:02.957480  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:10:02.957517  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:10:03.074874  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:10:03.074924  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:10:03.111150  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:10:03.111190  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:10:09.733432  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:10:05.932844  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:10:05.933640  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:10:05.933702  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:10:05.933763  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:10:05.972235  486208 cri.go:89] found id: "ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:10:05.972261  486208 cri.go:89] found id: ""
	I0819 20:10:05.972269  486208 logs.go:276] 1 containers: [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5]
	I0819 20:10:05.972319  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:05.976769  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:10:05.976841  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:10:06.016101  486208 cri.go:89] found id: "c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:10:06.016130  486208 cri.go:89] found id: ""
	I0819 20:10:06.016140  486208 logs.go:276] 1 containers: [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5]
	I0819 20:10:06.016199  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:06.020400  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:10:06.020486  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:10:06.063410  486208 cri.go:89] found id: "cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:10:06.063436  486208 cri.go:89] found id: "ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:10:06.063442  486208 cri.go:89] found id: ""
	I0819 20:10:06.063451  486208 logs.go:276] 2 containers: [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1]
	I0819 20:10:06.063517  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:06.067537  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:06.071436  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:10:06.071513  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:10:06.110513  486208 cri.go:89] found id: "2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:10:06.110543  486208 cri.go:89] found id: "30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:10:06.110547  486208 cri.go:89] found id: ""
	I0819 20:10:06.110554  486208 logs.go:276] 2 containers: [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf]
	I0819 20:10:06.110613  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:06.114945  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:06.118943  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:10:06.119008  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:10:06.158246  486208 cri.go:89] found id: "0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:10:06.158279  486208 cri.go:89] found id: ""
	I0819 20:10:06.158288  486208 logs.go:276] 1 containers: [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4]
	I0819 20:10:06.158339  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:06.162749  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:10:06.162830  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:10:06.206758  486208 cri.go:89] found id: "dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:10:06.206794  486208 cri.go:89] found id: ""
	I0819 20:10:06.206803  486208 logs.go:276] 1 containers: [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74]
	I0819 20:10:06.206868  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:06.211230  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:10:06.211319  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:10:06.252108  486208 cri.go:89] found id: ""
	I0819 20:10:06.252137  486208 logs.go:276] 0 containers: []
	W0819 20:10:06.252145  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:10:06.252152  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:10:06.252208  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:10:06.291689  486208 cri.go:89] found id: "356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:10:06.291715  486208 cri.go:89] found id: ""
	I0819 20:10:06.291724  486208 logs.go:276] 1 containers: [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f]
	I0819 20:10:06.291781  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:10:06.296513  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:10:06.296549  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:10:06.418199  486208 logs.go:123] Gathering logs for coredns [cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc] ...
	I0819 20:10:06.418244  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb942e7dda186c56224bf14f4ee85acb304c265eef337ff85cb25d9cc47215bc"
	I0819 20:10:06.456795  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:10:06.456835  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:10:06.533551  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:10:06.533575  486208 logs.go:123] Gathering logs for coredns [ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1] ...
	I0819 20:10:06.533589  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef4422e79856a147ebb36d3903be5f0de34528be9b1c202382f01a953bdbb2d1"
	I0819 20:10:06.570842  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:10:06.570877  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:10:06.616925  486208 logs.go:123] Gathering logs for kube-apiserver [ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5] ...
	I0819 20:10:06.616964  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff0609931599c3c21d79a4a9ab13aff581338b6dc498dc5a444f296005d5f8a5"
	I0819 20:10:06.660672  486208 logs.go:123] Gathering logs for etcd [c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5] ...
	I0819 20:10:06.660713  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6905e4f1b04c042f9063b13d14fab0013cfc977ee2d01a05980f02e9f0acec5"
	I0819 20:10:06.703426  486208 logs.go:123] Gathering logs for kube-scheduler [2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660] ...
	I0819 20:10:06.703466  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e50cda5cc85a1ee70ec66c838d900900b13ad373ca701bbda62f91ab8f4b660"
	I0819 20:10:06.785025  486208 logs.go:123] Gathering logs for kube-controller-manager [dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74] ...
	I0819 20:10:06.785065  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda6ea86ba1662d7d14c4624bcf3473203887386860c64edb79a3aa0c0539b74"
	I0819 20:10:06.819826  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:10:06.819853  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:10:06.833074  486208 logs.go:123] Gathering logs for kube-scheduler [30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf] ...
	I0819 20:10:06.833101  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ea9f420f304653bac74ab47b3fd168a707a46a7663ea1fafc0103a7c31cfbf"
	I0819 20:10:06.873070  486208 logs.go:123] Gathering logs for kube-proxy [0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4] ...
	I0819 20:10:06.873104  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db097a01d0b5a3aff4416d88d7d7084872afa575bb7d7d5d8592c469689fba4"
	I0819 20:10:06.907236  486208 logs.go:123] Gathering logs for storage-provisioner [356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f] ...
	I0819 20:10:06.907275  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 356a2f463f5ac9ab6c77a63de7b023417651ef025bace712787a64affd0d8e3f"
	I0819 20:10:06.941392  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:10:06.941420  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
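	(The log-gathering pass above is driven entirely by shell commands executed over SSH on the guest. As a minimal sketch — assuming crictl and journalctl are on the node's PATH, as they are in the minikube guest — the same data could be collected by hand with:
	
	    sudo crictl ps -a --quiet --name=kube-scheduler        # list container IDs for one component
	    sudo /usr/bin/crictl logs --tail 400 <container-id>    # dump logs for a specific container
	    sudo journalctl -u kubelet -n 400                      # kubelet unit logs
	    sudo journalctl -u crio -n 400                         # CRI-O unit logs
	
	Here <container-id> is a placeholder for any ID returned by the ps step, not a value from this run.)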
	I0819 20:10:09.764848  486208 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8443/healthz ...
	I0819 20:10:09.765580  486208 api_server.go:269] stopped: https://192.168.50.10:8443/healthz: Get "https://192.168.50.10:8443/healthz": dial tcp 192.168.50.10:8443: connect: connection refused
	I0819 20:10:09.765654  486208 kubeadm.go:597] duration metric: took 4m2.802300286s to restartPrimaryControlPlane
	W0819 20:10:09.765725  486208 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 20:10:09.765757  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 20:10:10.770324  486208 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.004537017s)
	I0819 20:10:10.770422  486208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:10:10.786292  486208 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 20:10:10.796679  486208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 20:10:10.806822  486208 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 20:10:10.806843  486208 kubeadm.go:157] found existing configuration files:
	
	I0819 20:10:10.806896  486208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 20:10:10.816532  486208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 20:10:10.816618  486208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 20:10:10.826648  486208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 20:10:10.836497  486208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 20:10:10.836571  486208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 20:10:10.846508  486208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 20:10:10.855866  486208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 20:10:10.855930  486208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 20:10:10.865606  486208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 20:10:10.874941  486208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 20:10:10.875006  486208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
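	(The stale-config cleanup above applies one pattern per kubeconfig file: grep for the expected control-plane endpoint, and if the file is absent or does not contain it, remove it so kubeadm can regenerate it. A minimal sketch of that check for a single file, using admin.conf as the example:
	
	    if ! sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf; then
	      sudo rm -f /etc/kubernetes/admin.conf   # stale or missing; kubeadm init rewrites it below
	    fi
	
	The same check is repeated for kubelet.conf, controller-manager.conf and scheduler.conf before kubeadm init is started with --ignore-preflight-errors.)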
	I0819 20:10:10.884779  486208 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 20:10:10.930625  486208 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 20:10:10.930759  486208 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 20:10:11.025618  486208 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 20:10:11.025769  486208 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 20:10:11.025943  486208 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 20:10:11.033268  486208 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 20:10:11.035065  486208 out.go:235]   - Generating certificates and keys ...
	I0819 20:10:11.035202  486208 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 20:10:11.035298  486208 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 20:10:11.035447  486208 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 20:10:11.035574  486208 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 20:10:11.035680  486208 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 20:10:11.035755  486208 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 20:10:11.035839  486208 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 20:10:11.035929  486208 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 20:10:11.036031  486208 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 20:10:11.036144  486208 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 20:10:11.036196  486208 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 20:10:11.036274  486208 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 20:10:11.153189  486208 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 20:10:11.276726  486208 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 20:10:11.531351  486208 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 20:10:11.654205  486208 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 20:10:11.875609  486208 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 20:10:11.875994  486208 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 20:10:11.880392  486208 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 20:10:12.805500  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:10:11.882063  486208 out.go:235]   - Booting up control plane ...
	I0819 20:10:11.882177  486208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 20:10:11.882262  486208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 20:10:11.882327  486208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 20:10:11.900734  486208 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 20:10:11.907078  486208 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 20:10:11.907133  486208 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 20:10:12.043386  486208 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 20:10:12.043515  486208 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 20:10:12.564021  486208 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 520.614511ms
	I0819 20:10:12.564154  486208 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 20:10:18.885451  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:10:21.957519  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:10:28.037419  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:10:31.109456  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:10:37.189446  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:10:40.261481  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:10:46.341418  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:10:49.413472  486861 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.196:22: connect: no route to host
	I0819 20:10:52.418015  487175 start.go:364] duration metric: took 4m17.441922571s to acquireMachinesLock for "embed-certs-108534"
	I0819 20:10:52.418081  487175 start.go:96] Skipping create...Using existing machine configuration
	I0819 20:10:52.418087  487175 fix.go:54] fixHost starting: 
	I0819 20:10:52.418410  487175 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:10:52.418440  487175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:10:52.433913  487175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45359
	I0819 20:10:52.434397  487175 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:10:52.434996  487175 main.go:141] libmachine: Using API Version  1
	I0819 20:10:52.435025  487175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:10:52.435356  487175 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:10:52.435603  487175 main.go:141] libmachine: (embed-certs-108534) Calling .DriverName
	I0819 20:10:52.435801  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetState
	I0819 20:10:52.437777  487175 fix.go:112] recreateIfNeeded on embed-certs-108534: state=Stopped err=<nil>
	I0819 20:10:52.437810  487175 main.go:141] libmachine: (embed-certs-108534) Calling .DriverName
	W0819 20:10:52.438007  487175 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 20:10:52.439712  487175 out.go:177] * Restarting existing kvm2 VM for "embed-certs-108534" ...
	I0819 20:10:52.415473  486861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 20:10:52.415527  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetMachineName
	I0819 20:10:52.415911  486861 buildroot.go:166] provisioning hostname "no-preload-944514"
	I0819 20:10:52.415938  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetMachineName
	I0819 20:10:52.416198  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHHostname
	I0819 20:10:52.417848  486861 machine.go:96] duration metric: took 4m37.416883287s to provisionDockerMachine
	I0819 20:10:52.417908  486861 fix.go:56] duration metric: took 4m37.438952542s for fixHost
	I0819 20:10:52.417915  486861 start.go:83] releasing machines lock for "no-preload-944514", held for 4m37.438975436s
	W0819 20:10:52.417950  486861 start.go:714] error starting host: provision: host is not running
	W0819 20:10:52.418047  486861 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 20:10:52.418058  486861 start.go:729] Will try again in 5 seconds ...
	I0819 20:10:52.440905  487175 main.go:141] libmachine: (embed-certs-108534) Calling .Start
	I0819 20:10:52.441157  487175 main.go:141] libmachine: (embed-certs-108534) Ensuring networks are active...
	I0819 20:10:52.441970  487175 main.go:141] libmachine: (embed-certs-108534) Ensuring network default is active
	I0819 20:10:52.442317  487175 main.go:141] libmachine: (embed-certs-108534) Ensuring network mk-embed-certs-108534 is active
	I0819 20:10:52.442604  487175 main.go:141] libmachine: (embed-certs-108534) Getting domain xml...
	I0819 20:10:52.443293  487175 main.go:141] libmachine: (embed-certs-108534) Creating domain...
	I0819 20:10:53.691289  487175 main.go:141] libmachine: (embed-certs-108534) Waiting to get IP...
	I0819 20:10:53.692194  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:10:53.692563  487175 main.go:141] libmachine: (embed-certs-108534) DBG | unable to find current IP address of domain embed-certs-108534 in network mk-embed-certs-108534
	I0819 20:10:53.692633  487175 main.go:141] libmachine: (embed-certs-108534) DBG | I0819 20:10:53.692552  488330 retry.go:31] will retry after 199.149158ms: waiting for machine to come up
	I0819 20:10:53.893157  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:10:53.893683  487175 main.go:141] libmachine: (embed-certs-108534) DBG | unable to find current IP address of domain embed-certs-108534 in network mk-embed-certs-108534
	I0819 20:10:53.893712  487175 main.go:141] libmachine: (embed-certs-108534) DBG | I0819 20:10:53.893633  488330 retry.go:31] will retry after 298.267094ms: waiting for machine to come up
	I0819 20:10:54.193400  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:10:54.193955  487175 main.go:141] libmachine: (embed-certs-108534) DBG | unable to find current IP address of domain embed-certs-108534 in network mk-embed-certs-108534
	I0819 20:10:54.193990  487175 main.go:141] libmachine: (embed-certs-108534) DBG | I0819 20:10:54.193917  488330 retry.go:31] will retry after 358.371061ms: waiting for machine to come up
	I0819 20:10:54.553499  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:10:54.553988  487175 main.go:141] libmachine: (embed-certs-108534) DBG | unable to find current IP address of domain embed-certs-108534 in network mk-embed-certs-108534
	I0819 20:10:54.554018  487175 main.go:141] libmachine: (embed-certs-108534) DBG | I0819 20:10:54.553941  488330 retry.go:31] will retry after 373.776551ms: waiting for machine to come up
	I0819 20:10:57.419819  486861 start.go:360] acquireMachinesLock for no-preload-944514: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 20:10:54.929701  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:10:54.930154  487175 main.go:141] libmachine: (embed-certs-108534) DBG | unable to find current IP address of domain embed-certs-108534 in network mk-embed-certs-108534
	I0819 20:10:54.930183  487175 main.go:141] libmachine: (embed-certs-108534) DBG | I0819 20:10:54.930114  488330 retry.go:31] will retry after 728.058441ms: waiting for machine to come up
	I0819 20:10:55.660257  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:10:55.660876  487175 main.go:141] libmachine: (embed-certs-108534) DBG | unable to find current IP address of domain embed-certs-108534 in network mk-embed-certs-108534
	I0819 20:10:55.660928  487175 main.go:141] libmachine: (embed-certs-108534) DBG | I0819 20:10:55.660818  488330 retry.go:31] will retry after 731.385188ms: waiting for machine to come up
	I0819 20:10:56.393744  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:10:56.394142  487175 main.go:141] libmachine: (embed-certs-108534) DBG | unable to find current IP address of domain embed-certs-108534 in network mk-embed-certs-108534
	I0819 20:10:56.394174  487175 main.go:141] libmachine: (embed-certs-108534) DBG | I0819 20:10:56.394106  488330 retry.go:31] will retry after 838.549988ms: waiting for machine to come up
	I0819 20:10:57.234163  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:10:57.234610  487175 main.go:141] libmachine: (embed-certs-108534) DBG | unable to find current IP address of domain embed-certs-108534 in network mk-embed-certs-108534
	I0819 20:10:57.234638  487175 main.go:141] libmachine: (embed-certs-108534) DBG | I0819 20:10:57.234552  488330 retry.go:31] will retry after 1.073814746s: waiting for machine to come up
	I0819 20:10:58.310430  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:10:58.310872  487175 main.go:141] libmachine: (embed-certs-108534) DBG | unable to find current IP address of domain embed-certs-108534 in network mk-embed-certs-108534
	I0819 20:10:58.310902  487175 main.go:141] libmachine: (embed-certs-108534) DBG | I0819 20:10:58.310810  488330 retry.go:31] will retry after 1.185851795s: waiting for machine to come up
	I0819 20:10:59.498180  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:10:59.498632  487175 main.go:141] libmachine: (embed-certs-108534) DBG | unable to find current IP address of domain embed-certs-108534 in network mk-embed-certs-108534
	I0819 20:10:59.498665  487175 main.go:141] libmachine: (embed-certs-108534) DBG | I0819 20:10:59.498606  488330 retry.go:31] will retry after 1.819971905s: waiting for machine to come up
	I0819 20:11:01.320404  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:01.320910  487175 main.go:141] libmachine: (embed-certs-108534) DBG | unable to find current IP address of domain embed-certs-108534 in network mk-embed-certs-108534
	I0819 20:11:01.320945  487175 main.go:141] libmachine: (embed-certs-108534) DBG | I0819 20:11:01.320863  488330 retry.go:31] will retry after 2.590147662s: waiting for machine to come up
	I0819 20:11:03.913437  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:03.913906  487175 main.go:141] libmachine: (embed-certs-108534) DBG | unable to find current IP address of domain embed-certs-108534 in network mk-embed-certs-108534
	I0819 20:11:03.913934  487175 main.go:141] libmachine: (embed-certs-108534) DBG | I0819 20:11:03.913856  488330 retry.go:31] will retry after 3.083927012s: waiting for machine to come up
	I0819 20:11:06.999942  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:07.000340  487175 main.go:141] libmachine: (embed-certs-108534) DBG | unable to find current IP address of domain embed-certs-108534 in network mk-embed-certs-108534
	I0819 20:11:07.000368  487175 main.go:141] libmachine: (embed-certs-108534) DBG | I0819 20:11:07.000315  488330 retry.go:31] will retry after 3.216509361s: waiting for machine to come up
	I0819 20:11:11.482061  487755 start.go:364] duration metric: took 2m48.416135029s to acquireMachinesLock for "old-k8s-version-968990"
	I0819 20:11:11.482131  487755 start.go:96] Skipping create...Using existing machine configuration
	I0819 20:11:11.482140  487755 fix.go:54] fixHost starting: 
	I0819 20:11:11.482527  487755 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:11:11.482560  487755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:11:11.500120  487755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36515
	I0819 20:11:11.500598  487755 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:11:11.501100  487755 main.go:141] libmachine: Using API Version  1
	I0819 20:11:11.501139  487755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:11:11.501470  487755 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:11:11.501684  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .DriverName
	I0819 20:11:11.501836  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetState
	I0819 20:11:11.503544  487755 fix.go:112] recreateIfNeeded on old-k8s-version-968990: state=Stopped err=<nil>
	I0819 20:11:11.503573  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .DriverName
	W0819 20:11:11.503759  487755 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 20:11:11.506257  487755 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-968990" ...
	I0819 20:11:11.507590  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .Start
	I0819 20:11:11.507860  487755 main.go:141] libmachine: (old-k8s-version-968990) Ensuring networks are active...
	I0819 20:11:11.508758  487755 main.go:141] libmachine: (old-k8s-version-968990) Ensuring network default is active
	I0819 20:11:11.509108  487755 main.go:141] libmachine: (old-k8s-version-968990) Ensuring network mk-old-k8s-version-968990 is active
	I0819 20:11:11.509701  487755 main.go:141] libmachine: (old-k8s-version-968990) Getting domain xml...
	I0819 20:11:11.510453  487755 main.go:141] libmachine: (old-k8s-version-968990) Creating domain...
	I0819 20:11:12.808729  487755 main.go:141] libmachine: (old-k8s-version-968990) Waiting to get IP...
	I0819 20:11:12.809706  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:12.810225  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | unable to find current IP address of domain old-k8s-version-968990 in network mk-old-k8s-version-968990
	I0819 20:11:12.810309  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | I0819 20:11:12.810224  488470 retry.go:31] will retry after 307.717003ms: waiting for machine to come up
	I0819 20:11:10.220110  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.220681  487175 main.go:141] libmachine: (embed-certs-108534) Found IP for machine: 192.168.72.88
	I0819 20:11:10.220708  487175 main.go:141] libmachine: (embed-certs-108534) Reserving static IP address...
	I0819 20:11:10.220724  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has current primary IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.221181  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "embed-certs-108534", mac: "52:54:00:60:6a:92", ip: "192.168.72.88"} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:10.221213  487175 main.go:141] libmachine: (embed-certs-108534) Reserved static IP address: 192.168.72.88
	I0819 20:11:10.221228  487175 main.go:141] libmachine: (embed-certs-108534) DBG | skip adding static IP to network mk-embed-certs-108534 - found existing host DHCP lease matching {name: "embed-certs-108534", mac: "52:54:00:60:6a:92", ip: "192.168.72.88"}
	I0819 20:11:10.221238  487175 main.go:141] libmachine: (embed-certs-108534) DBG | Getting to WaitForSSH function...
	I0819 20:11:10.221249  487175 main.go:141] libmachine: (embed-certs-108534) Waiting for SSH to be available...
	I0819 20:11:10.223449  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.223803  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:10.223848  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.223975  487175 main.go:141] libmachine: (embed-certs-108534) DBG | Using SSH client type: external
	I0819 20:11:10.224001  487175 main.go:141] libmachine: (embed-certs-108534) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/embed-certs-108534/id_rsa (-rw-------)
	I0819 20:11:10.224032  487175 main.go:141] libmachine: (embed-certs-108534) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/embed-certs-108534/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 20:11:10.224051  487175 main.go:141] libmachine: (embed-certs-108534) DBG | About to run SSH command:
	I0819 20:11:10.224064  487175 main.go:141] libmachine: (embed-certs-108534) DBG | exit 0
	I0819 20:11:10.349238  487175 main.go:141] libmachine: (embed-certs-108534) DBG | SSH cmd err, output: <nil>: 
	I0819 20:11:10.349606  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetConfigRaw
	I0819 20:11:10.350286  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetIP
	I0819 20:11:10.353055  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.353437  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:10.353467  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.353818  487175 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/embed-certs-108534/config.json ...
	I0819 20:11:10.354033  487175 machine.go:93] provisionDockerMachine start ...
	I0819 20:11:10.354052  487175 main.go:141] libmachine: (embed-certs-108534) Calling .DriverName
	I0819 20:11:10.354305  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHHostname
	I0819 20:11:10.356652  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.356971  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:10.356996  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.357206  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHPort
	I0819 20:11:10.357423  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:11:10.357643  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:11:10.357838  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHUsername
	I0819 20:11:10.358044  487175 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:10.358304  487175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0819 20:11:10.358321  487175 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 20:11:10.457667  487175 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 20:11:10.457706  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetMachineName
	I0819 20:11:10.457988  487175 buildroot.go:166] provisioning hostname "embed-certs-108534"
	I0819 20:11:10.458021  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetMachineName
	I0819 20:11:10.458199  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHHostname
	I0819 20:11:10.460584  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.460909  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:10.460936  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.461072  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHPort
	I0819 20:11:10.461272  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:11:10.461472  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:11:10.461695  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHUsername
	I0819 20:11:10.461890  487175 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:10.462068  487175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0819 20:11:10.462081  487175 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-108534 && echo "embed-certs-108534" | sudo tee /etc/hostname
	I0819 20:11:10.575586  487175 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-108534
	
	I0819 20:11:10.575622  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHHostname
	I0819 20:11:10.578598  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.578934  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:10.578973  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.579120  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHPort
	I0819 20:11:10.579326  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:11:10.579505  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:11:10.579649  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHUsername
	I0819 20:11:10.579822  487175 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:10.580032  487175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0819 20:11:10.580053  487175 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-108534' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-108534/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-108534' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 20:11:10.685358  487175 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 20:11:10.685391  487175 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 20:11:10.685410  487175 buildroot.go:174] setting up certificates
	I0819 20:11:10.685419  487175 provision.go:84] configureAuth start
	I0819 20:11:10.685429  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetMachineName
	I0819 20:11:10.685755  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetIP
	I0819 20:11:10.688461  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.688861  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:10.688886  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.689078  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHHostname
	I0819 20:11:10.691293  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.691634  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:10.691670  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.691789  487175 provision.go:143] copyHostCerts
	I0819 20:11:10.691881  487175 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 20:11:10.691894  487175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 20:11:10.691989  487175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 20:11:10.692108  487175 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 20:11:10.692121  487175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 20:11:10.692157  487175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 20:11:10.692229  487175 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 20:11:10.692238  487175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 20:11:10.692271  487175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 20:11:10.692338  487175 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.embed-certs-108534 san=[127.0.0.1 192.168.72.88 embed-certs-108534 localhost minikube]
	I0819 20:11:10.832094  487175 provision.go:177] copyRemoteCerts
	I0819 20:11:10.832161  487175 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 20:11:10.832200  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHHostname
	I0819 20:11:10.835239  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.835617  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:10.835649  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.835835  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHPort
	I0819 20:11:10.836054  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:11:10.836197  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHUsername
	I0819 20:11:10.836306  487175 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/embed-certs-108534/id_rsa Username:docker}
	I0819 20:11:10.914849  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 20:11:10.939741  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 20:11:10.964897  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 20:11:10.989569  487175 provision.go:87] duration metric: took 304.127898ms to configureAuth
	I0819 20:11:10.989603  487175 buildroot.go:189] setting minikube options for container-runtime
	I0819 20:11:10.989801  487175 config.go:182] Loaded profile config "embed-certs-108534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:11:10.989882  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHHostname
	I0819 20:11:10.992812  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.993221  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:10.993266  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:10.993438  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHPort
	I0819 20:11:10.993682  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:11:10.993851  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:11:10.994024  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHUsername
	I0819 20:11:10.994176  487175 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:10.994363  487175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0819 20:11:10.994378  487175 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 20:11:11.254368  487175 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 20:11:11.254395  487175 machine.go:96] duration metric: took 900.348967ms to provisionDockerMachine
	I0819 20:11:11.254407  487175 start.go:293] postStartSetup for "embed-certs-108534" (driver="kvm2")
	I0819 20:11:11.254422  487175 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 20:11:11.254466  487175 main.go:141] libmachine: (embed-certs-108534) Calling .DriverName
	I0819 20:11:11.254875  487175 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 20:11:11.254915  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHHostname
	I0819 20:11:11.257896  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:11.258260  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:11.258312  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:11.258412  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHPort
	I0819 20:11:11.258629  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:11:11.258774  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHUsername
	I0819 20:11:11.258897  487175 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/embed-certs-108534/id_rsa Username:docker}
	I0819 20:11:11.340032  487175 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 20:11:11.344442  487175 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 20:11:11.344488  487175 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 20:11:11.344584  487175 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 20:11:11.344688  487175 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 20:11:11.344815  487175 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 20:11:11.354628  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 20:11:11.378564  487175 start.go:296] duration metric: took 124.141334ms for postStartSetup
	I0819 20:11:11.378619  487175 fix.go:56] duration metric: took 18.960531315s for fixHost
	I0819 20:11:11.378643  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHHostname
	I0819 20:11:11.381686  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:11.381986  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:11.382037  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:11.382194  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHPort
	I0819 20:11:11.382435  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:11:11.382616  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:11:11.382760  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHUsername
	I0819 20:11:11.382952  487175 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:11.383139  487175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0819 20:11:11.383150  487175 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 20:11:11.481885  487175 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724098271.456501293
	
	I0819 20:11:11.481910  487175 fix.go:216] guest clock: 1724098271.456501293
	I0819 20:11:11.481925  487175 fix.go:229] Guest: 2024-08-19 20:11:11.456501293 +0000 UTC Remote: 2024-08-19 20:11:11.378623534 +0000 UTC m=+276.549214718 (delta=77.877759ms)
	I0819 20:11:11.481955  487175 fix.go:200] guest clock delta is within tolerance: 77.877759ms
	I0819 20:11:11.481962  487175 start.go:83] releasing machines lock for "embed-certs-108534", held for 19.063903297s
	I0819 20:11:11.481996  487175 main.go:141] libmachine: (embed-certs-108534) Calling .DriverName
	I0819 20:11:11.482274  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetIP
	I0819 20:11:11.485358  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:11.485762  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:11.485801  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:11.486048  487175 main.go:141] libmachine: (embed-certs-108534) Calling .DriverName
	I0819 20:11:11.486594  487175 main.go:141] libmachine: (embed-certs-108534) Calling .DriverName
	I0819 20:11:11.486812  487175 main.go:141] libmachine: (embed-certs-108534) Calling .DriverName
	I0819 20:11:11.486936  487175 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 20:11:11.487009  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHHostname
	I0819 20:11:11.487028  487175 ssh_runner.go:195] Run: cat /version.json
	I0819 20:11:11.487052  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHHostname
	I0819 20:11:11.489823  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:11.490062  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:11.490161  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:11.490200  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:11.490375  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHPort
	I0819 20:11:11.490478  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:11.490531  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:11.490736  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHPort
	I0819 20:11:11.490754  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:11:11.490934  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:11:11.490959  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHUsername
	I0819 20:11:11.491082  487175 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/embed-certs-108534/id_rsa Username:docker}
	I0819 20:11:11.491147  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHUsername
	I0819 20:11:11.491289  487175 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/embed-certs-108534/id_rsa Username:docker}
	I0819 20:11:11.602267  487175 ssh_runner.go:195] Run: systemctl --version
	I0819 20:11:11.608496  487175 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 20:11:11.751012  487175 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 20:11:11.756738  487175 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 20:11:11.756837  487175 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:11:11.773221  487175 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
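The two runs above first check for a loopback CNI config and then rename any bridge or podman configs under /etc/cni/net.d with a .mk_disabled suffix, which is why 87-podman-bridge.conflist is reported as disabled. A minimal Go sketch of that rename pass (a hypothetical helper mirroring the find/mv pipeline in the log, not minikube's cni package) could look like:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNIConfigs renames bridge/podman CNI configs so the
// container runtime's own network config is the only one left active.
func disableConflictingCNIConfigs(dir, suffix string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		// Skip anything already disabled and anything that is not a bridge
		// or podman config, mirroring the find(1) filter in the log above.
		if e.IsDir() || strings.HasSuffix(name, suffix) {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+suffix); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNIConfigs("/etc/cni/net.d", ".mk_disabled")
	if err != nil {
		fmt.Fprintln(os.Stderr, "disable cni configs:", err)
		os.Exit(1)
	}
	fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
}
```

Disabling the conflicting configs leaves the runtime's own bridge configuration as the only active CNI network before the cgroup driver detection below.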
	I0819 20:11:11.773254  487175 start.go:495] detecting cgroup driver to use...
	I0819 20:11:11.773335  487175 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 20:11:11.796735  487175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 20:11:11.812521  487175 docker.go:217] disabling cri-docker service (if available) ...
	I0819 20:11:11.812593  487175 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 20:11:11.826923  487175 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 20:11:11.841651  487175 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 20:11:11.956329  487175 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 20:11:12.131500  487175 docker.go:233] disabling docker service ...
	I0819 20:11:12.131572  487175 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 20:11:12.145953  487175 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 20:11:12.163045  487175 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 20:11:12.287134  487175 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 20:11:12.419248  487175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 20:11:12.434973  487175 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 20:11:12.453917  487175 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 20:11:12.453978  487175 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:12.464815  487175 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 20:11:12.464891  487175 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:12.475442  487175 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:12.486025  487175 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:12.497927  487175 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 20:11:12.509473  487175 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:12.521101  487175 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:12.540063  487175 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:12.553360  487175 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 20:11:12.563198  487175 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 20:11:12.563272  487175 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 20:11:12.577392  487175 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 20:11:12.587311  487175 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:11:12.711224  487175 ssh_runner.go:195] Run: sudo systemctl restart crio
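The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, reset conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls, before systemd is reloaded and crio restarted. As a rough illustration, a Go sketch of the two central rewrites (a hypothetical helper, not minikube's crio package) might be:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioDropIn pins the pause image and the cgroup manager in a
// crio.conf.d drop-in, the same substitutions the sed commands above perform.
func rewriteCrioDropIn(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := pauseRe.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = cgroupRe.ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	// 0o644 is assumed here; the real drop-in's permissions are not shown in the log.
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := rewriteCrioDropIn("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

The crio restart afterwards is what makes the drop-in take effect, which is why the next step waits up to 60s for /var/run/crio/crio.sock to reappear.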
	I0819 20:11:12.852675  487175 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 20:11:12.852756  487175 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 20:11:12.858220  487175 start.go:563] Will wait 60s for crictl version
	I0819 20:11:12.858286  487175 ssh_runner.go:195] Run: which crictl
	I0819 20:11:12.861942  487175 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 20:11:12.897602  487175 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 20:11:12.897692  487175 ssh_runner.go:195] Run: crio --version
	I0819 20:11:12.925819  487175 ssh_runner.go:195] Run: crio --version
	I0819 20:11:12.963090  487175 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 20:11:12.964358  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetIP
	I0819 20:11:12.967613  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:12.968023  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:11:12.968055  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:11:12.968281  487175 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 20:11:12.972558  487175 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
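The pair of commands above makes sure /etc/hosts on the guest maps host.minikube.internal to the gateway 192.168.72.1: the grep checks whether the entry already exists, and the bash pipeline rewrites the file with any old entry stripped and a fresh one appended (the same pattern is used again later for control-plane.minikube.internal). A sketch of that idempotent update in Go (a hypothetical helper, not minikube's code) might be:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for host and appends "ip\thost",
// the same effect as the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing mapping for this hostname (the grep -v equivalent).
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```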
	I0819 20:11:12.985734  487175 kubeadm.go:883] updating cluster {Name:embed-certs-108534 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-108534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 20:11:12.985916  487175 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:11:12.985967  487175 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:11:13.023098  487175 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 20:11:13.023210  487175 ssh_runner.go:195] Run: which lz4
	I0819 20:11:13.027591  487175 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 20:11:13.032220  487175 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 20:11:13.032268  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 20:11:14.423896  487175 crio.go:462] duration metric: took 1.396353853s to copy over tarball
	I0819 20:11:14.423973  487175 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
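Because crictl images showed none of the expected registry.k8s.io images, the preloaded tarball (~389 MB, preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4) is copied to /preloaded.tar.lz4 on the guest and unpacked into /var with the tar command above. A hedged guest-side sketch of that check-then-extract step in Go (minikube actually drives these commands over SSH via ssh_runner) could be:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"os/exec"
)

// extractPreloadIfPresent unpacks a preloaded image tarball into destDir,
// reporting an error when the tarball has not been copied onto the host yet.
func extractPreloadIfPresent(tarball, destDir string) error {
	if _, err := os.Stat(tarball); errors.Is(err, fs.ErrNotExist) {
		return fmt.Errorf("preload %s not present yet: %w", tarball, err)
	} else if err != nil {
		return err
	}
	// Same flags as in the log: keep xattrs/capabilities, decompress with lz4.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreloadIfPresent("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```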
	I0819 20:11:13.120118  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:13.120679  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | unable to find current IP address of domain old-k8s-version-968990 in network mk-old-k8s-version-968990
	I0819 20:11:13.120711  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | I0819 20:11:13.120625  488470 retry.go:31] will retry after 312.732422ms: waiting for machine to come up
	I0819 20:11:13.435370  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:13.435923  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | unable to find current IP address of domain old-k8s-version-968990 in network mk-old-k8s-version-968990
	I0819 20:11:13.435953  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | I0819 20:11:13.435877  488470 retry.go:31] will retry after 298.254798ms: waiting for machine to come up
	I0819 20:11:13.735531  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:13.736121  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | unable to find current IP address of domain old-k8s-version-968990 in network mk-old-k8s-version-968990
	I0819 20:11:13.736148  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | I0819 20:11:13.736063  488470 retry.go:31] will retry after 366.47196ms: waiting for machine to come up
	I0819 20:11:14.104909  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:14.105467  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | unable to find current IP address of domain old-k8s-version-968990 in network mk-old-k8s-version-968990
	I0819 20:11:14.105501  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | I0819 20:11:14.105416  488470 retry.go:31] will retry after 687.449356ms: waiting for machine to come up
	I0819 20:11:14.794304  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:14.794853  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | unable to find current IP address of domain old-k8s-version-968990 in network mk-old-k8s-version-968990
	I0819 20:11:14.794880  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | I0819 20:11:14.794806  488470 retry.go:31] will retry after 697.684847ms: waiting for machine to come up
	I0819 20:11:15.494256  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:15.494785  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | unable to find current IP address of domain old-k8s-version-968990 in network mk-old-k8s-version-968990
	I0819 20:11:15.494808  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | I0819 20:11:15.494727  488470 retry.go:31] will retry after 736.119178ms: waiting for machine to come up
	I0819 20:11:16.232723  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:16.233162  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | unable to find current IP address of domain old-k8s-version-968990 in network mk-old-k8s-version-968990
	I0819 20:11:16.233209  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | I0819 20:11:16.233120  488470 retry.go:31] will retry after 998.535401ms: waiting for machine to come up
	I0819 20:11:17.234033  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:17.234777  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | unable to find current IP address of domain old-k8s-version-968990 in network mk-old-k8s-version-968990
	I0819 20:11:17.234810  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | I0819 20:11:17.234711  488470 retry.go:31] will retry after 1.521290166s: waiting for machine to come up
	I0819 20:11:16.749012  487175 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.32500209s)
	I0819 20:11:16.749055  487175 crio.go:469] duration metric: took 2.32512369s to extract the tarball
	I0819 20:11:16.749066  487175 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 20:11:16.786777  487175 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:11:16.830833  487175 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 20:11:16.830863  487175 cache_images.go:84] Images are preloaded, skipping loading
	I0819 20:11:16.830874  487175 kubeadm.go:934] updating node { 192.168.72.88 8443 v1.31.0 crio true true} ...
	I0819 20:11:16.831023  487175 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-108534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-108534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 20:11:16.831119  487175 ssh_runner.go:195] Run: crio config
	I0819 20:11:16.877626  487175 cni.go:84] Creating CNI manager for ""
	I0819 20:11:16.877651  487175 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 20:11:16.877664  487175 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 20:11:16.877688  487175 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.88 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-108534 NodeName:embed-certs-108534 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 20:11:16.877861  487175 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-108534"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 20:11:16.877925  487175 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 20:11:16.888008  487175 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 20:11:16.888086  487175 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 20:11:16.897760  487175 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 20:11:16.915845  487175 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 20:11:16.932780  487175 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
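The 2159-byte kubeadm.yaml.new copied above is the config printed at kubeadm.go:187, rendered from the options listed at kubeadm.go:181 (advertise address, node name, pod and service CIDRs, cgroup driver, CRI socket). A small text/template sketch of how such a file can be rendered (clusterOpts is a hypothetical struct, not minikube's real config types, and only the first two documents are shown) is:

```go
package main

import (
	"os"
	"text/template"
)

// clusterOpts is a hypothetical subset of the kubeadm options shown above.
type clusterOpts struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := clusterOpts{
		AdvertiseAddress:  "192.168.72.88",
		BindPort:          8443,
		NodeName:          "embed-certs-108534",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.31.0",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(initConfig))
	// Print to stdout here; in the log the rendered YAML goes to
	// /var/tmp/minikube/kubeadm.yaml.new on the guest.
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
```

Later in the log the new file is diffed against the existing /var/tmp/minikube/kubeadm.yaml and copied into place before the kubeadm init phases run.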
	I0819 20:11:16.950857  487175 ssh_runner.go:195] Run: grep 192.168.72.88	control-plane.minikube.internal$ /etc/hosts
	I0819 20:11:16.955047  487175 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 20:11:16.967476  487175 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:11:17.078533  487175 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:11:17.094294  487175 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/embed-certs-108534 for IP: 192.168.72.88
	I0819 20:11:17.094325  487175 certs.go:194] generating shared ca certs ...
	I0819 20:11:17.094347  487175 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:11:17.094554  487175 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 20:11:17.094615  487175 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 20:11:17.094632  487175 certs.go:256] generating profile certs ...
	I0819 20:11:17.094781  487175 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/embed-certs-108534/client.key
	I0819 20:11:17.094870  487175 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/embed-certs-108534/apiserver.key.7c69ddad
	I0819 20:11:17.094925  487175 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/embed-certs-108534/proxy-client.key
	I0819 20:11:17.095092  487175 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 20:11:17.095141  487175 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 20:11:17.095153  487175 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 20:11:17.095184  487175 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 20:11:17.095220  487175 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 20:11:17.095258  487175 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 20:11:17.095310  487175 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 20:11:17.096234  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 20:11:17.154471  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 20:11:17.187853  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 20:11:17.218272  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 20:11:17.251177  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/embed-certs-108534/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 20:11:17.278149  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/embed-certs-108534/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 20:11:17.302948  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/embed-certs-108534/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 20:11:17.327461  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/embed-certs-108534/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 20:11:17.352876  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 20:11:17.376882  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 20:11:17.402343  487175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 20:11:17.426824  487175 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 20:11:17.444794  487175 ssh_runner.go:195] Run: openssl version
	I0819 20:11:17.450679  487175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 20:11:17.462273  487175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:11:17.467181  487175 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:11:17.467246  487175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:11:17.472961  487175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 20:11:17.484428  487175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 20:11:17.495895  487175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 20:11:17.500398  487175 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 20:11:17.500460  487175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 20:11:17.506329  487175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 20:11:17.517811  487175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 20:11:17.529386  487175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 20:11:17.534105  487175 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 20:11:17.534168  487175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 20:11:17.540042  487175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 20:11:17.551506  487175 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 20:11:17.556271  487175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 20:11:17.562462  487175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 20:11:17.568605  487175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 20:11:17.575156  487175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 20:11:17.581533  487175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 20:11:17.587743  487175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
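The openssl x509 -checkend 86400 invocations above confirm that each control-plane certificate (apiserver, etcd server/peer/healthcheck clients, front-proxy client) remains valid for at least another 24 hours before the restart proceeds. An equivalent check written against crypto/x509, as a sketch rather than minikube's actual certs code, looks like:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window (the openssl -checkend equivalent).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, fmt.Errorf("%s: no certificate PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
```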
	I0819 20:11:17.594047  487175 kubeadm.go:392] StartCluster: {Name:embed-certs-108534 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-108534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:11:17.594149  487175 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 20:11:17.594201  487175 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 20:11:17.631943  487175 cri.go:89] found id: ""
	I0819 20:11:17.632066  487175 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 20:11:17.644034  487175 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 20:11:17.644056  487175 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 20:11:17.644100  487175 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 20:11:17.655566  487175 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 20:11:17.656926  487175 kubeconfig.go:125] found "embed-certs-108534" server: "https://192.168.72.88:8443"
	I0819 20:11:17.659854  487175 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 20:11:17.671696  487175 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.88
	I0819 20:11:17.671735  487175 kubeadm.go:1160] stopping kube-system containers ...
	I0819 20:11:17.671755  487175 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 20:11:17.671819  487175 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 20:11:17.712298  487175 cri.go:89] found id: ""
	I0819 20:11:17.712371  487175 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 20:11:17.729650  487175 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 20:11:17.739921  487175 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 20:11:17.739951  487175 kubeadm.go:157] found existing configuration files:
	
	I0819 20:11:17.740015  487175 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 20:11:17.750196  487175 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 20:11:17.750287  487175 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 20:11:17.760393  487175 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 20:11:17.770242  487175 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 20:11:17.770335  487175 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 20:11:17.780693  487175 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 20:11:17.790325  487175 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 20:11:17.790402  487175 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 20:11:17.800679  487175 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 20:11:17.810325  487175 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 20:11:17.810407  487175 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
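The kubeadm.go:157-163 lines above implement the stale-config check: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is grepped for https://control-plane.minikube.internal:8443 and removed when the endpoint is missing (here none of the files exist, so every grep exits with status 2 and every rm is effectively a no-op). A compact Go sketch of that cleanup loop (hypothetical, not the kubeadm.go source) could be:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes kubeconfig-style files that do not point at
// the expected control-plane endpoint, matching the grep/rm pairs in the log.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // file already targets the right endpoint, keep it
		}
		// Missing or stale: remove it so kubeadm can regenerate it cleanly.
		if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
			fmt.Fprintf(os.Stderr, "remove %s: %v\n", p, err)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```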
	I0819 20:11:17.820762  487175 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 20:11:17.834279  487175 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:11:17.946312  487175 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:11:18.988084  487175 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.041689796s)
	I0819 20:11:18.988124  487175 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:11:19.181448  487175 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:11:19.251952  487175 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:11:19.347338  487175 api_server.go:52] waiting for apiserver process to appear ...
	I0819 20:11:19.347478  487175 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:19.848401  487175 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:18.758333  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:18.758798  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | unable to find current IP address of domain old-k8s-version-968990 in network mk-old-k8s-version-968990
	I0819 20:11:18.758828  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | I0819 20:11:18.758728  488470 retry.go:31] will retry after 1.586047171s: waiting for machine to come up
	I0819 20:11:20.346637  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:20.347064  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | unable to find current IP address of domain old-k8s-version-968990 in network mk-old-k8s-version-968990
	I0819 20:11:20.347087  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | I0819 20:11:20.347026  488470 retry.go:31] will retry after 2.715629806s: waiting for machine to come up
	I0819 20:11:20.348430  487175 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:20.364922  487175 api_server.go:72] duration metric: took 1.017591272s to wait for apiserver process to appear ...
	I0819 20:11:20.364959  487175 api_server.go:88] waiting for apiserver healthz status ...
	I0819 20:11:20.364985  487175 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0819 20:11:22.729271  487175 api_server.go:279] https://192.168.72.88:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 20:11:22.729392  487175 api_server.go:103] status: https://192.168.72.88:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 20:11:22.729427  487175 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0819 20:11:22.751679  487175 api_server.go:279] https://192.168.72.88:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 20:11:22.751793  487175 api_server.go:103] status: https://192.168.72.88:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 20:11:22.865974  487175 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0819 20:11:22.870649  487175 api_server.go:279] https://192.168.72.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:11:22.870687  487175 api_server.go:103] status: https://192.168.72.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
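api_server.go polls https://192.168.72.88:8443/healthz roughly every 500ms: the early 403s come from the probe hitting the endpoint as system:anonymous before the RBAC bootstrap roles exist, the 500s list the post-start hooks (the [-] entries) that have not finished yet, and the loop ends once the endpoint returns 200. A simplified version of such a poll loop in Go (a sketch assuming an unauthenticated, certificate-skipping probe, not minikube's exact implementation) might be:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it returns 200
// or the deadline passes, printing the body (the [+]/[-] hook list) otherwise.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cert signed by the cluster CA; this sketch
		// skips verification instead of loading that CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.88:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```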
	I0819 20:11:23.365217  487175 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0819 20:11:23.383625  487175 api_server.go:279] https://192.168.72.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:11:23.383674  487175 api_server.go:103] status: https://192.168.72.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:11:23.865256  487175 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0819 20:11:23.877516  487175 api_server.go:279] https://192.168.72.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:11:23.877557  487175 api_server.go:103] status: https://192.168.72.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:11:24.365101  487175 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0819 20:11:24.369719  487175 api_server.go:279] https://192.168.72.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:11:24.369754  487175 api_server.go:103] status: https://192.168.72.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:11:24.865620  487175 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0819 20:11:24.872504  487175 api_server.go:279] https://192.168.72.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:11:24.872540  487175 api_server.go:103] status: https://192.168.72.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:11:25.365896  487175 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0819 20:11:25.373550  487175 api_server.go:279] https://192.168.72.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:11:25.373582  487175 api_server.go:103] status: https://192.168.72.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:11:25.865166  487175 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0819 20:11:25.870344  487175 api_server.go:279] https://192.168.72.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:11:25.870374  487175 api_server.go:103] status: https://192.168.72.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:11:26.366074  487175 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0819 20:11:26.370574  487175 api_server.go:279] https://192.168.72.88:8443/healthz returned 200:
	ok
	I0819 20:11:26.376683  487175 api_server.go:141] control plane version: v1.31.0
	I0819 20:11:26.376713  487175 api_server.go:131] duration metric: took 6.011747121s to wait for apiserver health ...
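The six-second wait recorded above is a plain poll loop: request https://192.168.72.88:8443/healthz, treat anything other than 200 as "not ready", and retry roughly every 500ms (the cadence visible in the timestamps) until a deadline. A minimal Go sketch of that pattern, assuming a skip-verify TLS client and a made-up overall timeout; it illustrates the idea, not minikube's actual api_server.go:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it answers 200 OK or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // Skipping TLS verification is a shortcut for this sketch only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // control plane is up
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.88:8443/healthz", 4*time.Minute); err != nil {
            panic(err)
        }
    }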
	I0819 20:11:26.376725  487175 cni.go:84] Creating CNI manager for ""
	I0819 20:11:26.376736  487175 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 20:11:26.378653  487175 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 20:11:23.064212  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:23.064728  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | unable to find current IP address of domain old-k8s-version-968990 in network mk-old-k8s-version-968990
	I0819 20:11:23.064783  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | I0819 20:11:23.064683  488470 retry.go:31] will retry after 2.412753463s: waiting for machine to come up
	I0819 20:11:25.479657  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:25.480101  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | unable to find current IP address of domain old-k8s-version-968990 in network mk-old-k8s-version-968990
	I0819 20:11:25.480138  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | I0819 20:11:25.480053  488470 retry.go:31] will retry after 4.222766121s: waiting for machine to come up
	I0819 20:11:26.380015  487175 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 20:11:26.391237  487175 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
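The 496-byte /etc/cni/net.d/1-k8s.conflist pushed above is not reproduced in the log. For orientation only, a bridge CNI config of that general shape, written out from Go, would look like the sketch below; every field value here is an assumption, not the exact file minikube ships:

    package main

    import "os"

    // Illustrative bridge + portmap conflist; the real 1-k8s.conflist contents
    // are not shown in this log, so these values are assumed.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // The test copies the file into the guest over SSH; writing it locally
        // keeps the sketch self-contained.
        if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }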
	I0819 20:11:26.409990  487175 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 20:11:26.419067  487175 system_pods.go:59] 8 kube-system pods found
	I0819 20:11:26.419111  487175 system_pods.go:61] "coredns-6f6b679f8f-nnx2r" [0211b5d3-beba-4c8f-89c6-feda67f21b62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 20:11:26.419118  487175 system_pods.go:61] "etcd-embed-certs-108534" [a0304c54-396b-4993-8668-de5b1a1fd987] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 20:11:26.419126  487175 system_pods.go:61] "kube-apiserver-embed-certs-108534" [b16366d7-8a3a-415a-83a6-035ddf8b4511] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 20:11:26.419137  487175 system_pods.go:61] "kube-controller-manager-embed-certs-108534" [a1d187df-cfc8-4d74-92d9-3b170a6cb8ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 20:11:26.419142  487175 system_pods.go:61] "kube-proxy-2fqh4" [016e7bbe-c49e-4d15-8628-9757eacb5263] Running
	I0819 20:11:26.419146  487175 system_pods.go:61] "kube-scheduler-embed-certs-108534" [c58dcae6-3762-45e6-af66-120806670476] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 20:11:26.419155  487175 system_pods.go:61] "metrics-server-6867b74b74-9shzw" [9e1166ba-e0d7-4c89-ae08-01c81604ceff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 20:11:26.419164  487175 system_pods.go:61] "storage-provisioner" [ae276247-eb8e-41f8-9c60-84517581c342] Running
	I0819 20:11:26.419173  487175 system_pods.go:74] duration metric: took 9.16203ms to wait for pod list to return data ...
	I0819 20:11:26.419180  487175 node_conditions.go:102] verifying NodePressure condition ...
	I0819 20:11:26.423590  487175 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 20:11:26.423624  487175 node_conditions.go:123] node cpu capacity is 2
	I0819 20:11:26.423638  487175 node_conditions.go:105] duration metric: took 4.453432ms to run NodePressure ...
	I0819 20:11:26.423656  487175 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:11:26.685272  487175 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 20:11:26.690513  487175 kubeadm.go:739] kubelet initialised
	I0819 20:11:26.690539  487175 kubeadm.go:740] duration metric: took 5.240705ms waiting for restarted kubelet to initialise ...
	I0819 20:11:26.690551  487175 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:11:26.697546  487175 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-nnx2r" in "kube-system" namespace to be "Ready" ...
	I0819 20:11:26.704115  487175 pod_ready.go:98] node "embed-certs-108534" hosting pod "coredns-6f6b679f8f-nnx2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-108534" has status "Ready":"False"
	I0819 20:11:26.704145  487175 pod_ready.go:82] duration metric: took 6.562504ms for pod "coredns-6f6b679f8f-nnx2r" in "kube-system" namespace to be "Ready" ...
	E0819 20:11:26.704155  487175 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-108534" hosting pod "coredns-6f6b679f8f-nnx2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-108534" has status "Ready":"False"
	I0819 20:11:26.704163  487175 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	I0819 20:11:26.712645  487175 pod_ready.go:98] node "embed-certs-108534" hosting pod "etcd-embed-certs-108534" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-108534" has status "Ready":"False"
	I0819 20:11:26.712679  487175 pod_ready.go:82] duration metric: took 8.508164ms for pod "etcd-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	E0819 20:11:26.712689  487175 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-108534" hosting pod "etcd-embed-certs-108534" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-108534" has status "Ready":"False"
	I0819 20:11:26.712707  487175 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	I0819 20:11:26.718560  487175 pod_ready.go:98] node "embed-certs-108534" hosting pod "kube-apiserver-embed-certs-108534" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-108534" has status "Ready":"False"
	I0819 20:11:26.718586  487175 pod_ready.go:82] duration metric: took 5.871727ms for pod "kube-apiserver-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	E0819 20:11:26.718596  487175 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-108534" hosting pod "kube-apiserver-embed-certs-108534" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-108534" has status "Ready":"False"
	I0819 20:11:26.718604  487175 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	I0819 20:11:26.814856  487175 pod_ready.go:98] node "embed-certs-108534" hosting pod "kube-controller-manager-embed-certs-108534" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-108534" has status "Ready":"False"
	I0819 20:11:26.814886  487175 pod_ready.go:82] duration metric: took 96.274167ms for pod "kube-controller-manager-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	E0819 20:11:26.814896  487175 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-108534" hosting pod "kube-controller-manager-embed-certs-108534" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-108534" has status "Ready":"False"
	I0819 20:11:26.814907  487175 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2fqh4" in "kube-system" namespace to be "Ready" ...
	I0819 20:11:27.213062  487175 pod_ready.go:93] pod "kube-proxy-2fqh4" in "kube-system" namespace has status "Ready":"True"
	I0819 20:11:27.213090  487175 pod_ready.go:82] duration metric: took 398.174058ms for pod "kube-proxy-2fqh4" in "kube-system" namespace to be "Ready" ...
	I0819 20:11:27.213100  487175 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	I0819 20:11:29.219749  487175 pod_ready.go:103] pod "kube-scheduler-embed-certs-108534" in "kube-system" namespace has status "Ready":"False"
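The pod_ready.go entries above reduce to one check per system-critical pod: read its PodReady condition and keep waiting while it is not True (skipping pods whose node is itself not Ready). A small client-go sketch of that condition check, assuming a reachable kubeconfig at the default path; it mirrors the idea, not minikube's implementation:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's PodReady condition is True.
    func isReady(p corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%-45s Ready=%v\n", p.Name, isReady(p))
        }
    }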
	I0819 20:11:30.946008  486861 start.go:364] duration metric: took 33.52611795s to acquireMachinesLock for "no-preload-944514"
	I0819 20:11:30.946058  486861 start.go:96] Skipping create...Using existing machine configuration
	I0819 20:11:30.946064  486861 fix.go:54] fixHost starting: 
	I0819 20:11:30.946453  486861 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:11:30.946480  486861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:11:30.963255  486861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33991
	I0819 20:11:30.963751  486861 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:11:30.964248  486861 main.go:141] libmachine: Using API Version  1
	I0819 20:11:30.964269  486861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:11:30.964642  486861 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:11:30.964849  486861 main.go:141] libmachine: (no-preload-944514) Calling .DriverName
	I0819 20:11:30.965061  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetState
	I0819 20:11:30.966720  486861 fix.go:112] recreateIfNeeded on no-preload-944514: state=Stopped err=<nil>
	I0819 20:11:30.966752  486861 main.go:141] libmachine: (no-preload-944514) Calling .DriverName
	W0819 20:11:30.966922  486861 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 20:11:30.968993  486861 out.go:177] * Restarting existing kvm2 VM for "no-preload-944514" ...
	I0819 20:11:29.706278  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:29.706747  487755 main.go:141] libmachine: (old-k8s-version-968990) Found IP for machine: 192.168.39.213
	I0819 20:11:29.706772  487755 main.go:141] libmachine: (old-k8s-version-968990) Reserving static IP address...
	I0819 20:11:29.706807  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has current primary IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:29.707217  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "old-k8s-version-968990", mac: "52:54:00:54:09:7a", ip: "192.168.39.213"} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:29.707245  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | skip adding static IP to network mk-old-k8s-version-968990 - found existing host DHCP lease matching {name: "old-k8s-version-968990", mac: "52:54:00:54:09:7a", ip: "192.168.39.213"}
	I0819 20:11:29.707261  487755 main.go:141] libmachine: (old-k8s-version-968990) Reserved static IP address: 192.168.39.213
	I0819 20:11:29.707276  487755 main.go:141] libmachine: (old-k8s-version-968990) Waiting for SSH to be available...
	I0819 20:11:29.707290  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | Getting to WaitForSSH function...
	I0819 20:11:29.709623  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:29.709976  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:29.710014  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:29.710113  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | Using SSH client type: external
	I0819 20:11:29.710149  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/old-k8s-version-968990/id_rsa (-rw-------)
	I0819 20:11:29.710177  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/old-k8s-version-968990/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 20:11:29.710192  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | About to run SSH command:
	I0819 20:11:29.710201  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | exit 0
	I0819 20:11:29.837344  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | SSH cmd err, output: <nil>: 
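The WaitForSSH step above shells out to the system ssh client with host-key checking disabled and runs `exit 0` until it exits cleanly. A rough Go equivalent of that retry loop, reusing the host and key path shown in the log (the retry interval and timeout are assumed values):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH retries `ssh host exit 0` until it succeeds or the timeout expires.
    func waitForSSH(host, keyPath string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-o", "IdentitiesOnly=yes",
                "-i", keyPath,
                host, "exit 0")
            if err := cmd.Run(); err == nil {
                return nil // SSH is reachable
            }
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("ssh to %s not available within %s", host, timeout)
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19423-430949/.minikube/machines/old-k8s-version-968990/id_rsa"
        if err := waitForSSH("docker@192.168.39.213", key, 2*time.Minute); err != nil {
            panic(err)
        }
    }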
	I0819 20:11:29.837708  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetConfigRaw
	I0819 20:11:29.838396  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetIP
	I0819 20:11:29.841022  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:29.841402  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:29.841437  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:29.841696  487755 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/old-k8s-version-968990/config.json ...
	I0819 20:11:29.841924  487755 machine.go:93] provisionDockerMachine start ...
	I0819 20:11:29.841944  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .DriverName
	I0819 20:11:29.842179  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHHostname
	I0819 20:11:29.844665  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:29.845025  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:29.845048  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:29.845182  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHPort
	I0819 20:11:29.845386  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHKeyPath
	I0819 20:11:29.845583  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHKeyPath
	I0819 20:11:29.845721  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHUsername
	I0819 20:11:29.845895  487755 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:29.846150  487755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0819 20:11:29.846162  487755 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 20:11:29.945594  487755 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 20:11:29.945629  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetMachineName
	I0819 20:11:29.945896  487755 buildroot.go:166] provisioning hostname "old-k8s-version-968990"
	I0819 20:11:29.945928  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetMachineName
	I0819 20:11:29.946132  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHHostname
	I0819 20:11:29.948780  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:29.949268  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:29.949314  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:29.949510  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHPort
	I0819 20:11:29.949731  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHKeyPath
	I0819 20:11:29.949905  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHKeyPath
	I0819 20:11:29.950008  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHUsername
	I0819 20:11:29.950163  487755 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:29.950388  487755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0819 20:11:29.950403  487755 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-968990 && echo "old-k8s-version-968990" | sudo tee /etc/hostname
	I0819 20:11:30.063135  487755 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-968990
	
	I0819 20:11:30.063175  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHHostname
	I0819 20:11:30.066352  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.066723  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:30.066762  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.066914  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHPort
	I0819 20:11:30.067148  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHKeyPath
	I0819 20:11:30.067329  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHKeyPath
	I0819 20:11:30.067471  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHUsername
	I0819 20:11:30.067631  487755 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:30.067872  487755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0819 20:11:30.067892  487755 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-968990' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-968990/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-968990' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 20:11:30.174048  487755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 20:11:30.174078  487755 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 20:11:30.174102  487755 buildroot.go:174] setting up certificates
	I0819 20:11:30.174113  487755 provision.go:84] configureAuth start
	I0819 20:11:30.174122  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetMachineName
	I0819 20:11:30.174421  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetIP
	I0819 20:11:30.177094  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.177537  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:30.177573  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.177788  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHHostname
	I0819 20:11:30.179951  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.180292  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:30.180319  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.180432  487755 provision.go:143] copyHostCerts
	I0819 20:11:30.180494  487755 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 20:11:30.180504  487755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 20:11:30.180564  487755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 20:11:30.180654  487755 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 20:11:30.180662  487755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 20:11:30.180681  487755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 20:11:30.180733  487755 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 20:11:30.180741  487755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 20:11:30.180759  487755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 20:11:30.180810  487755 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-968990 san=[127.0.0.1 192.168.39.213 localhost minikube old-k8s-version-968990]
	I0819 20:11:30.305443  487755 provision.go:177] copyRemoteCerts
	I0819 20:11:30.305523  487755 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 20:11:30.305570  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHHostname
	I0819 20:11:30.308784  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.309205  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:30.309248  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.309411  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHPort
	I0819 20:11:30.309638  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHKeyPath
	I0819 20:11:30.309817  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHUsername
	I0819 20:11:30.309961  487755 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/old-k8s-version-968990/id_rsa Username:docker}
	I0819 20:11:30.387345  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 20:11:30.411580  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 20:11:30.435886  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 20:11:30.459745  487755 provision.go:87] duration metric: took 285.618847ms to configureAuth
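configureAuth above generates and copies a server certificate whose SANs are listed at provision.go:117 (127.0.0.1, 192.168.39.213, localhost, minikube, old-k8s-version-968990). For reference, producing a server cert with those SANs in Go looks roughly like this; the sketch self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem, and the expiry simply mirrors the CertExpiration value from the cluster config:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-968990"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-968990"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.213")},
        }
        // Self-signed here; minikube uses its CA certificate as the parent instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        certOut, _ := os.Create("server.pem")
        pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        certOut.Close()
        keyOut, _ := os.Create("server-key.pem")
        pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        keyOut.Close()
    }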
	I0819 20:11:30.459777  487755 buildroot.go:189] setting minikube options for container-runtime
	I0819 20:11:30.459989  487755 config.go:182] Loaded profile config "old-k8s-version-968990": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 20:11:30.460077  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHHostname
	I0819 20:11:30.462896  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.463298  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:30.463326  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.463496  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHPort
	I0819 20:11:30.463696  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHKeyPath
	I0819 20:11:30.463853  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHKeyPath
	I0819 20:11:30.463979  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHUsername
	I0819 20:11:30.464133  487755 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:30.464343  487755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0819 20:11:30.464359  487755 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 20:11:30.717579  487755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 20:11:30.717603  487755 machine.go:96] duration metric: took 875.664199ms to provisionDockerMachine
	I0819 20:11:30.717616  487755 start.go:293] postStartSetup for "old-k8s-version-968990" (driver="kvm2")
	I0819 20:11:30.717626  487755 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 20:11:30.717651  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .DriverName
	I0819 20:11:30.717995  487755 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 20:11:30.718017  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHHostname
	I0819 20:11:30.721260  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.721576  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:30.721602  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.721802  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHPort
	I0819 20:11:30.722032  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHKeyPath
	I0819 20:11:30.722291  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHUsername
	I0819 20:11:30.722457  487755 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/old-k8s-version-968990/id_rsa Username:docker}
	I0819 20:11:30.799914  487755 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 20:11:30.804380  487755 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 20:11:30.804413  487755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 20:11:30.804498  487755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 20:11:30.804595  487755 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 20:11:30.804693  487755 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 20:11:30.814780  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 20:11:30.841783  487755 start.go:296] duration metric: took 124.151161ms for postStartSetup
	I0819 20:11:30.841837  487755 fix.go:56] duration metric: took 19.359696661s for fixHost
	I0819 20:11:30.841866  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHHostname
	I0819 20:11:30.844941  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.845341  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:30.845375  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.845553  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHPort
	I0819 20:11:30.845779  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHKeyPath
	I0819 20:11:30.845957  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHKeyPath
	I0819 20:11:30.846127  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHUsername
	I0819 20:11:30.846264  487755 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:30.846528  487755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0819 20:11:30.846542  487755 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 20:11:30.945825  487755 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724098290.920173573
	
	I0819 20:11:30.945853  487755 fix.go:216] guest clock: 1724098290.920173573
	I0819 20:11:30.945862  487755 fix.go:229] Guest: 2024-08-19 20:11:30.920173573 +0000 UTC Remote: 2024-08-19 20:11:30.841843376 +0000 UTC m=+187.925920761 (delta=78.330197ms)
	I0819 20:11:30.945906  487755 fix.go:200] guest clock delta is within tolerance: 78.330197ms
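The guest clock check above runs `date +%s.%N` inside the VM and compares it with the host-side reference: 1724098290.920173573s versus ...30.841843376s yields the logged 78.330197ms delta, which is within tolerance. A tiny sketch of that comparison (the 1s tolerance is an assumed value, not minikube's constant):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest clock from `date +%s.%N` and the host-side reference, as logged.
        guest := time.Unix(1724098290, 920173573)
        remote := time.Date(2024, 8, 19, 20, 11, 30, 841843376, time.UTC)

        delta := guest.Sub(remote) // 78.330197ms
        const tolerance = 1 * time.Second
        within := delta < tolerance && delta > -tolerance
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, within)
    }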
	I0819 20:11:30.945917  487755 start.go:83] releasing machines lock for "old-k8s-version-968990", held for 19.463815132s
	I0819 20:11:30.945946  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .DriverName
	I0819 20:11:30.946234  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetIP
	I0819 20:11:30.948969  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.949351  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:30.949392  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.949539  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .DriverName
	I0819 20:11:30.950189  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .DriverName
	I0819 20:11:30.950433  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .DriverName
	I0819 20:11:30.950501  487755 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 20:11:30.950584  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHHostname
	I0819 20:11:30.950628  487755 ssh_runner.go:195] Run: cat /version.json
	I0819 20:11:30.950652  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHHostname
	I0819 20:11:30.953677  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.953916  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.954100  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:30.954144  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.954231  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:30.954258  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:30.954327  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHPort
	I0819 20:11:30.954473  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHPort
	I0819 20:11:30.954565  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHKeyPath
	I0819 20:11:30.954715  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHKeyPath
	I0819 20:11:30.954773  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHUsername
	I0819 20:11:30.954879  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetSSHUsername
	I0819 20:11:30.954945  487755 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/old-k8s-version-968990/id_rsa Username:docker}
	I0819 20:11:30.955032  487755 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/old-k8s-version-968990/id_rsa Username:docker}
	I0819 20:11:31.049443  487755 ssh_runner.go:195] Run: systemctl --version
	I0819 20:11:31.057069  487755 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 20:11:31.204883  487755 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 20:11:31.212892  487755 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 20:11:31.212995  487755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:11:31.230594  487755 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 20:11:31.230632  487755 start.go:495] detecting cgroup driver to use...
	I0819 20:11:31.230717  487755 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 20:11:31.251197  487755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 20:11:31.266698  487755 docker.go:217] disabling cri-docker service (if available) ...
	I0819 20:11:31.266790  487755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 20:11:31.287331  487755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 20:11:31.307652  487755 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 20:11:31.445209  487755 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 20:11:31.608151  487755 docker.go:233] disabling docker service ...
	I0819 20:11:31.608230  487755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 20:11:31.626298  487755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 20:11:31.641603  487755 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 20:11:31.796235  487755 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 20:11:31.924566  487755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 20:11:31.939953  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 20:11:31.959396  487755 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 20:11:31.959477  487755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:31.973343  487755 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 20:11:31.973419  487755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:31.987976  487755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:31.999361  487755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
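Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands, not copied from the guest), which the later `systemctl restart crio` then picks up:

    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"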
	I0819 20:11:32.014393  487755 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 20:11:32.029154  487755 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 20:11:32.040692  487755 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 20:11:32.040747  487755 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 20:11:32.055579  487755 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 20:11:32.066301  487755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:11:32.189025  487755 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 20:11:32.340166  487755 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 20:11:32.340260  487755 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 20:11:32.345338  487755 start.go:563] Will wait 60s for crictl version
	I0819 20:11:32.345416  487755 ssh_runner.go:195] Run: which crictl
	I0819 20:11:32.349495  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 20:11:32.386842  487755 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 20:11:32.386946  487755 ssh_runner.go:195] Run: crio --version
	I0819 20:11:32.417247  487755 ssh_runner.go:195] Run: crio --version
	I0819 20:11:32.454620  487755 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 20:11:32.455822  487755 main.go:141] libmachine: (old-k8s-version-968990) Calling .GetIP
	I0819 20:11:32.459481  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:32.459947  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:09:7a", ip: ""} in network mk-old-k8s-version-968990: {Iface:virbr3 ExpiryTime:2024-08-19 21:11:22 +0000 UTC Type:0 Mac:52:54:00:54:09:7a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:old-k8s-version-968990 Clientid:01:52:54:00:54:09:7a}
	I0819 20:11:32.459986  487755 main.go:141] libmachine: (old-k8s-version-968990) DBG | domain old-k8s-version-968990 has defined IP address 192.168.39.213 and MAC address 52:54:00:54:09:7a in network mk-old-k8s-version-968990
	I0819 20:11:32.460280  487755 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 20:11:32.464878  487755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 20:11:32.477711  487755 kubeadm.go:883] updating cluster {Name:old-k8s-version-968990 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 20:11:32.477827  487755 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 20:11:32.477872  487755 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:11:32.530973  487755 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 20:11:32.531050  487755 ssh_runner.go:195] Run: which lz4
	I0819 20:11:32.535307  487755 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 20:11:32.539590  487755 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 20:11:32.539638  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
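The stat probe failed because no preload tarball was on the guest yet, so the ~473 MB archive is copied over. Once the copy completes, the same existence check can be repeated and the archive inspected without extracting (a sketch):

	stat -c "%s %y" /preloaded.tar.lz4             # should now report 473237281 bytes
	sudo tar -I lz4 -tf /preloaded.tar.lz4 | head  # list a few entries to sanity-check the archive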
	I0819 20:11:30.970404  486861 main.go:141] libmachine: (no-preload-944514) Calling .Start
	I0819 20:11:30.970690  486861 main.go:141] libmachine: (no-preload-944514) Ensuring networks are active...
	I0819 20:11:30.971578  486861 main.go:141] libmachine: (no-preload-944514) Ensuring network default is active
	I0819 20:11:30.972062  486861 main.go:141] libmachine: (no-preload-944514) Ensuring network mk-no-preload-944514 is active
	I0819 20:11:30.972477  486861 main.go:141] libmachine: (no-preload-944514) Getting domain xml...
	I0819 20:11:30.973384  486861 main.go:141] libmachine: (no-preload-944514) Creating domain...
	I0819 20:11:32.393949  486861 main.go:141] libmachine: (no-preload-944514) Waiting to get IP...
	I0819 20:11:32.395080  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:32.395642  486861 main.go:141] libmachine: (no-preload-944514) DBG | unable to find current IP address of domain no-preload-944514 in network mk-no-preload-944514
	I0819 20:11:32.395731  486861 main.go:141] libmachine: (no-preload-944514) DBG | I0819 20:11:32.395605  488606 retry.go:31] will retry after 225.361514ms: waiting for machine to come up
	I0819 20:11:32.623281  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:32.623817  486861 main.go:141] libmachine: (no-preload-944514) DBG | unable to find current IP address of domain no-preload-944514 in network mk-no-preload-944514
	I0819 20:11:32.623844  486861 main.go:141] libmachine: (no-preload-944514) DBG | I0819 20:11:32.623771  488606 retry.go:31] will retry after 280.119947ms: waiting for machine to come up
	I0819 20:11:32.905716  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:32.906466  486861 main.go:141] libmachine: (no-preload-944514) DBG | unable to find current IP address of domain no-preload-944514 in network mk-no-preload-944514
	I0819 20:11:32.906497  486861 main.go:141] libmachine: (no-preload-944514) DBG | I0819 20:11:32.906415  488606 retry.go:31] will retry after 473.269046ms: waiting for machine to come up
	I0819 20:11:33.381226  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:33.381748  486861 main.go:141] libmachine: (no-preload-944514) DBG | unable to find current IP address of domain no-preload-944514 in network mk-no-preload-944514
	I0819 20:11:33.381778  486861 main.go:141] libmachine: (no-preload-944514) DBG | I0819 20:11:33.381721  488606 retry.go:31] will retry after 415.856173ms: waiting for machine to come up
	I0819 20:11:33.799468  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:33.800131  486861 main.go:141] libmachine: (no-preload-944514) DBG | unable to find current IP address of domain no-preload-944514 in network mk-no-preload-944514
	I0819 20:11:33.800164  486861 main.go:141] libmachine: (no-preload-944514) DBG | I0819 20:11:33.800075  488606 retry.go:31] will retry after 616.633181ms: waiting for machine to come up
	I0819 20:11:34.418732  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:34.419233  486861 main.go:141] libmachine: (no-preload-944514) DBG | unable to find current IP address of domain no-preload-944514 in network mk-no-preload-944514
	I0819 20:11:34.419259  486861 main.go:141] libmachine: (no-preload-944514) DBG | I0819 20:11:34.419153  488606 retry.go:31] will retry after 792.226996ms: waiting for machine to come up
	I0819 20:11:31.220750  487175 pod_ready.go:103] pod "kube-scheduler-embed-certs-108534" in "kube-system" namespace has status "Ready":"False"
	I0819 20:11:31.722240  487175 pod_ready.go:93] pod "kube-scheduler-embed-certs-108534" in "kube-system" namespace has status "Ready":"True"
	I0819 20:11:31.722277  487175 pod_ready.go:82] duration metric: took 4.50916932s for pod "kube-scheduler-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	I0819 20:11:31.722290  487175 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace to be "Ready" ...
	I0819 20:11:33.731377  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:11:34.187099  487755 crio.go:462] duration metric: took 1.65183581s to copy over tarball
	I0819 20:11:34.187200  487755 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 20:11:37.273645  487755 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.086407837s)
	I0819 20:11:37.273681  487755 crio.go:469] duration metric: took 3.086544782s to extract the tarball
	I0819 20:11:37.273692  487755 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 20:11:37.316071  487755 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:11:37.351372  487755 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 20:11:37.351411  487755 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 20:11:37.351490  487755 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:11:37.351527  487755 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 20:11:37.351551  487755 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 20:11:37.351560  487755 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 20:11:37.351493  487755 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 20:11:37.351538  487755 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 20:11:37.351554  487755 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 20:11:37.351535  487755 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 20:11:37.353261  487755 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 20:11:37.353341  487755 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 20:11:37.353363  487755 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:11:37.353372  487755 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 20:11:37.353363  487755 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 20:11:37.353364  487755 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 20:11:37.353421  487755 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 20:11:37.353428  487755 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 20:11:37.513905  487755 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 20:11:37.518236  487755 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 20:11:37.520046  487755 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 20:11:37.522661  487755 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 20:11:37.535858  487755 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 20:11:37.563263  487755 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 20:11:37.579160  487755 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 20:11:37.579216  487755 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 20:11:37.579271  487755 ssh_runner.go:195] Run: which crictl
	I0819 20:11:37.622730  487755 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 20:11:37.631623  487755 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 20:11:37.631667  487755 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 20:11:37.631713  487755 ssh_runner.go:195] Run: which crictl
	I0819 20:11:37.655185  487755 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 20:11:37.655237  487755 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 20:11:37.655289  487755 ssh_runner.go:195] Run: which crictl
	I0819 20:11:37.670307  487755 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 20:11:37.670355  487755 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 20:11:37.670407  487755 ssh_runner.go:195] Run: which crictl
	I0819 20:11:37.684249  487755 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 20:11:37.684305  487755 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 20:11:37.684359  487755 ssh_runner.go:195] Run: which crictl
	I0819 20:11:37.704836  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 20:11:37.704913  487755 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 20:11:37.704962  487755 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 20:11:37.705010  487755 ssh_runner.go:195] Run: which crictl
	I0819 20:11:37.705308  487755 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 20:11:37.705348  487755 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 20:11:37.705357  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 20:11:37.705383  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 20:11:37.705387  487755 ssh_runner.go:195] Run: which crictl
	I0819 20:11:37.714260  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 20:11:37.714294  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 20:11:37.714462  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 20:11:37.801638  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 20:11:37.801659  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 20:11:37.862882  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 20:11:37.862912  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 20:11:37.881683  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 20:11:37.895852  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 20:11:37.895930  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 20:11:37.895895  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 20:11:37.895968  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 20:11:35.213362  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:35.213942  486861 main.go:141] libmachine: (no-preload-944514) DBG | unable to find current IP address of domain no-preload-944514 in network mk-no-preload-944514
	I0819 20:11:35.213971  486861 main.go:141] libmachine: (no-preload-944514) DBG | I0819 20:11:35.213892  488606 retry.go:31] will retry after 856.748342ms: waiting for machine to come up
	I0819 20:11:36.072372  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:36.073037  486861 main.go:141] libmachine: (no-preload-944514) DBG | unable to find current IP address of domain no-preload-944514 in network mk-no-preload-944514
	I0819 20:11:36.073071  486861 main.go:141] libmachine: (no-preload-944514) DBG | I0819 20:11:36.072974  488606 retry.go:31] will retry after 1.366635615s: waiting for machine to come up
	I0819 20:11:37.440787  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:37.441235  486861 main.go:141] libmachine: (no-preload-944514) DBG | unable to find current IP address of domain no-preload-944514 in network mk-no-preload-944514
	I0819 20:11:37.441266  486861 main.go:141] libmachine: (no-preload-944514) DBG | I0819 20:11:37.441188  488606 retry.go:31] will retry after 1.149068087s: waiting for machine to come up
	I0819 20:11:38.592552  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:38.593183  486861 main.go:141] libmachine: (no-preload-944514) DBG | unable to find current IP address of domain no-preload-944514 in network mk-no-preload-944514
	I0819 20:11:38.593219  486861 main.go:141] libmachine: (no-preload-944514) DBG | I0819 20:11:38.593113  488606 retry.go:31] will retry after 1.930642733s: waiting for machine to come up
	I0819 20:11:35.732540  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:11:37.882440  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:11:38.011518  487755 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:11:38.016910  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 20:11:38.016911  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 20:11:38.041148  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 20:11:38.054084  487755 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 20:11:38.054172  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 20:11:38.062753  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 20:11:38.062874  487755 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 20:11:38.299616  487755 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 20:11:38.299686  487755 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 20:11:38.299730  487755 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 20:11:38.299805  487755 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 20:11:38.299878  487755 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 20:11:38.299888  487755 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 20:11:38.299955  487755 cache_images.go:92] duration metric: took 948.528583ms to LoadCachedImages
	W0819 20:11:38.300024  487755 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0819 20:11:38.300037  487755 kubeadm.go:934] updating node { 192.168.39.213 8443 v1.20.0 crio true true} ...
	I0819 20:11:38.300134  487755 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-968990 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 20:11:38.300193  487755 ssh_runner.go:195] Run: crio config
	I0819 20:11:38.354044  487755 cni.go:84] Creating CNI manager for ""
	I0819 20:11:38.354067  487755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 20:11:38.354076  487755 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 20:11:38.354096  487755 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-968990 NodeName:old-k8s-version-968990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 20:11:38.354302  487755 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-968990"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
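The generated kubeadm.yaml above bundles four documents: a v1beta2 InitConfiguration and ClusterConfiguration plus a KubeletConfiguration and a KubeProxyConfiguration. For comparison against the upstream defaults of the same kubeadm release, the bundled binary can print them (a sketch; the binary path is the one used later in this log):

	/var/lib/minikube/binaries/v1.20.0/kubeadm config print init-defaults \
	  --component-configs KubeletConfiguration,KubeProxyConfiguration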
	I0819 20:11:38.354386  487755 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 20:11:38.364427  487755 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 20:11:38.364515  487755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 20:11:38.374295  487755 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0819 20:11:38.391990  487755 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 20:11:38.409800  487755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0819 20:11:38.428224  487755 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I0819 20:11:38.432117  487755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
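The guarded rewrite above leaves exactly one control-plane.minikube.internal entry in /etc/hosts, pointing at the node IP. Verifying it afterwards is a one-liner (sketch):

	grep 'control-plane.minikube.internal' /etc/hosts   # expect: 192.168.39.213	control-plane.minikube.internal
	getent hosts control-plane.minikube.internal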
	I0819 20:11:38.445200  487755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:11:38.558168  487755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:11:38.576121  487755 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/old-k8s-version-968990 for IP: 192.168.39.213
	I0819 20:11:38.576148  487755 certs.go:194] generating shared ca certs ...
	I0819 20:11:38.576171  487755 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:11:38.576362  487755 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 20:11:38.576424  487755 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 20:11:38.576436  487755 certs.go:256] generating profile certs ...
	I0819 20:11:38.576558  487755 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/old-k8s-version-968990/client.key
	I0819 20:11:38.576630  487755 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/old-k8s-version-968990/apiserver.key.52abce5e
	I0819 20:11:38.576678  487755 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/old-k8s-version-968990/proxy-client.key
	I0819 20:11:38.576836  487755 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 20:11:38.576872  487755 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 20:11:38.576882  487755 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 20:11:38.576952  487755 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 20:11:38.576983  487755 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 20:11:38.577012  487755 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 20:11:38.577071  487755 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 20:11:38.577740  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 20:11:38.605780  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 20:11:38.639660  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 20:11:38.681724  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 20:11:38.718636  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/old-k8s-version-968990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 20:11:38.756292  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/old-k8s-version-968990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 20:11:38.802586  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/old-k8s-version-968990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 20:11:38.835855  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/old-k8s-version-968990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 20:11:38.862058  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 20:11:38.890102  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 20:11:38.916318  487755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 20:11:38.942199  487755 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 20:11:38.961170  487755 ssh_runner.go:195] Run: openssl version
	I0819 20:11:38.967396  487755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 20:11:38.978735  487755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:11:38.983922  487755 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:11:38.983997  487755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:11:38.989966  487755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 20:11:39.001388  487755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 20:11:39.013644  487755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 20:11:39.018916  487755 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 20:11:39.018981  487755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 20:11:39.026600  487755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 20:11:39.041190  487755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 20:11:39.053469  487755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 20:11:39.058443  487755 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 20:11:39.058537  487755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 20:11:39.064741  487755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
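Each ln -fs above creates the OpenSSL subject-hash alias used when looking up a CA under /etc/ssl/certs; the hash in the link name comes straight from openssl. Reproducing the b5213941.0 link for minikubeCA by hand would look roughly like (sketch):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0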
	I0819 20:11:39.078225  487755 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 20:11:39.083112  487755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 20:11:39.089378  487755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 20:11:39.095497  487755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 20:11:39.103912  487755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 20:11:39.112217  487755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 20:11:39.120226  487755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
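The -checkend 86400 probes above succeed only if the certificate stays valid for at least another 86400 seconds (24 hours). The same check for a single cert, with the expiry date printed for context (sketch):

	openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"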
	I0819 20:11:39.126416  487755 kubeadm.go:392] StartCluster: {Name:old-k8s-version-968990 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:11:39.126530  487755 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 20:11:39.126599  487755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 20:11:39.162460  487755 cri.go:89] found id: ""
	I0819 20:11:39.162539  487755 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 20:11:39.173099  487755 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 20:11:39.173122  487755 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 20:11:39.173191  487755 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 20:11:39.183104  487755 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 20:11:39.184151  487755 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-968990" does not appear in /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 20:11:39.184900  487755 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-430949/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-968990" cluster setting kubeconfig missing "old-k8s-version-968990" context setting]
	I0819 20:11:39.185828  487755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:11:39.197949  487755 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 20:11:39.208125  487755 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.213
	I0819 20:11:39.208170  487755 kubeadm.go:1160] stopping kube-system containers ...
	I0819 20:11:39.208185  487755 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 20:11:39.208250  487755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 20:11:39.245736  487755 cri.go:89] found id: ""
	I0819 20:11:39.245810  487755 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 20:11:39.263948  487755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 20:11:39.275275  487755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 20:11:39.275302  487755 kubeadm.go:157] found existing configuration files:
	
	I0819 20:11:39.275360  487755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 20:11:39.284860  487755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 20:11:39.284934  487755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 20:11:39.296348  487755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 20:11:39.305935  487755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 20:11:39.306026  487755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 20:11:39.316094  487755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 20:11:39.325756  487755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 20:11:39.325828  487755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 20:11:39.336088  487755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 20:11:39.345487  487755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 20:11:39.345569  487755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 20:11:39.355709  487755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 20:11:39.367989  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:11:39.497837  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:11:40.264614  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:11:40.511166  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:11:40.643997  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
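The five kubeadm init phase runs above regenerate certificates and kubeconfigs, (re)start the kubelet, and write the control-plane and etcd static pod manifests into the staticPodPath from the config. Whether the manifests landed can be checked directly (sketch):

	sudo ls /etc/kubernetes/manifests
	# expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml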
	I0819 20:11:40.725960  487755 api_server.go:52] waiting for apiserver process to appear ...
	I0819 20:11:40.726056  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:41.227704  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:41.726116  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:42.226120  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:42.726733  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:40.525261  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:40.525721  486861 main.go:141] libmachine: (no-preload-944514) DBG | unable to find current IP address of domain no-preload-944514 in network mk-no-preload-944514
	I0819 20:11:40.525750  486861 main.go:141] libmachine: (no-preload-944514) DBG | I0819 20:11:40.525682  488606 retry.go:31] will retry after 2.599037177s: waiting for machine to come up
	I0819 20:11:43.126878  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:43.127334  486861 main.go:141] libmachine: (no-preload-944514) DBG | unable to find current IP address of domain no-preload-944514 in network mk-no-preload-944514
	I0819 20:11:43.127357  486861 main.go:141] libmachine: (no-preload-944514) DBG | I0819 20:11:43.127290  488606 retry.go:31] will retry after 3.460762799s: waiting for machine to come up
	I0819 20:11:40.229338  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:11:42.230322  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:11:44.728626  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:11:43.226629  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:43.726763  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:44.226124  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:44.727142  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:45.226200  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:45.726122  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:46.226230  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:46.726871  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:47.226950  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:47.726814  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:46.728779  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:11:48.729226  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:11:46.589243  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:46.589813  486861 main.go:141] libmachine: (no-preload-944514) DBG | unable to find current IP address of domain no-preload-944514 in network mk-no-preload-944514
	I0819 20:11:46.589843  486861 main.go:141] libmachine: (no-preload-944514) DBG | I0819 20:11:46.589749  488606 retry.go:31] will retry after 4.335531104s: waiting for machine to come up
	I0819 20:11:48.226767  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:48.726373  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:49.226526  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:49.726370  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:50.226966  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:50.726125  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:51.226420  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:51.726952  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:52.226469  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:52.727092  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:50.926656  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:50.927124  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has current primary IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:50.927144  486861 main.go:141] libmachine: (no-preload-944514) Found IP for machine: 192.168.61.196
	I0819 20:11:50.927157  486861 main.go:141] libmachine: (no-preload-944514) Reserving static IP address...
	I0819 20:11:50.927562  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "no-preload-944514", mac: "52:54:00:b6:5d:93", ip: "192.168.61.196"} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:50.927591  486861 main.go:141] libmachine: (no-preload-944514) Reserved static IP address: 192.168.61.196
	I0819 20:11:50.927611  486861 main.go:141] libmachine: (no-preload-944514) DBG | skip adding static IP to network mk-no-preload-944514 - found existing host DHCP lease matching {name: "no-preload-944514", mac: "52:54:00:b6:5d:93", ip: "192.168.61.196"}
	I0819 20:11:50.927628  486861 main.go:141] libmachine: (no-preload-944514) DBG | Getting to WaitForSSH function...
	I0819 20:11:50.927643  486861 main.go:141] libmachine: (no-preload-944514) Waiting for SSH to be available...
	I0819 20:11:50.930359  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:50.930730  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:50.930779  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:50.930935  486861 main.go:141] libmachine: (no-preload-944514) DBG | Using SSH client type: external
	I0819 20:11:50.930967  486861 main.go:141] libmachine: (no-preload-944514) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/no-preload-944514/id_rsa (-rw-------)
	I0819 20:11:50.931002  486861 main.go:141] libmachine: (no-preload-944514) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-430949/.minikube/machines/no-preload-944514/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 20:11:50.931020  486861 main.go:141] libmachine: (no-preload-944514) DBG | About to run SSH command:
	I0819 20:11:50.931035  486861 main.go:141] libmachine: (no-preload-944514) DBG | exit 0
	I0819 20:11:51.053174  486861 main.go:141] libmachine: (no-preload-944514) DBG | SSH cmd err, output: <nil>: 
	I0819 20:11:51.053494  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetConfigRaw
	I0819 20:11:51.054174  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetIP
	I0819 20:11:51.056747  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.057088  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:51.057121  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.057372  486861 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/no-preload-944514/config.json ...
	I0819 20:11:51.057615  486861 machine.go:93] provisionDockerMachine start ...
	I0819 20:11:51.057633  486861 main.go:141] libmachine: (no-preload-944514) Calling .DriverName
	I0819 20:11:51.057839  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHHostname
	I0819 20:11:51.060218  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.060547  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:51.060576  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.060701  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHPort
	I0819 20:11:51.060909  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:11:51.061094  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:11:51.061274  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHUsername
	I0819 20:11:51.061460  486861 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:51.061646  486861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0819 20:11:51.061657  486861 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 20:11:51.161675  486861 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 20:11:51.161707  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetMachineName
	I0819 20:11:51.161977  486861 buildroot.go:166] provisioning hostname "no-preload-944514"
	I0819 20:11:51.162006  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetMachineName
	I0819 20:11:51.162237  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHHostname
	I0819 20:11:51.165205  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.165560  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:51.165591  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.165766  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHPort
	I0819 20:11:51.165976  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:11:51.166196  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:11:51.166349  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHUsername
	I0819 20:11:51.166517  486861 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:51.166706  486861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0819 20:11:51.166723  486861 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-944514 && echo "no-preload-944514" | sudo tee /etc/hostname
	I0819 20:11:51.279557  486861 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-944514
	
	I0819 20:11:51.279597  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHHostname
	I0819 20:11:51.282645  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.283040  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:51.283098  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.283211  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHPort
	I0819 20:11:51.283437  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:11:51.283644  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:11:51.283832  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHUsername
	I0819 20:11:51.284001  486861 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:51.284213  486861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0819 20:11:51.284238  486861 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-944514' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-944514/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-944514' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 20:11:51.394366  486861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
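
The SSH command above is how the provisioner pins the new hostname in the guest's /etc/hosts: rewrite an existing 127.0.1.1 entry if there is one, otherwise append one. A minimal Go sketch of assembling that command string (the helper name is illustrative, not minikube's actual code):

// Sketch only: building the /etc/hosts patch command seen in the log above.
// hostsPatchCmd is a hypothetical helper, not minikube's source.
package main

import "fmt"

func hostsPatchCmd(hostname string) string {
	return fmt.Sprintf(`
	if ! grep -xq '.*\s%s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
		else
			echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
		fi
	fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(hostsPatchCmd("no-preload-944514"))
}
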
	I0819 20:11:51.394398  486861 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 20:11:51.394418  486861 buildroot.go:174] setting up certificates
	I0819 20:11:51.394426  486861 provision.go:84] configureAuth start
	I0819 20:11:51.394436  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetMachineName
	I0819 20:11:51.394800  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetIP
	I0819 20:11:51.397624  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.398077  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:51.398105  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.398315  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHHostname
	I0819 20:11:51.400591  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.400886  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:51.400911  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.401068  486861 provision.go:143] copyHostCerts
	I0819 20:11:51.401156  486861 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 20:11:51.401180  486861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 20:11:51.401250  486861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 20:11:51.401405  486861 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 20:11:51.401420  486861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 20:11:51.401451  486861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 20:11:51.401541  486861 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 20:11:51.401553  486861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 20:11:51.401578  486861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 20:11:51.401645  486861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.no-preload-944514 san=[127.0.0.1 192.168.61.196 localhost minikube no-preload-944514]
	I0819 20:11:51.542071  486861 provision.go:177] copyRemoteCerts
	I0819 20:11:51.542129  486861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 20:11:51.542154  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHHostname
	I0819 20:11:51.544883  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.545313  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:51.545349  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.545594  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHPort
	I0819 20:11:51.545789  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:11:51.545942  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHUsername
	I0819 20:11:51.546105  486861 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/no-preload-944514/id_rsa Username:docker}
	I0819 20:11:51.623134  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 20:11:51.648658  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 20:11:51.673609  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 20:11:51.698791  486861 provision.go:87] duration metric: took 304.350917ms to configureAuth
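
configureAuth above generates a server certificate for the node with SANs [127.0.0.1 192.168.61.196 localhost minikube no-preload-944514] and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A rough standard-library sketch of a certificate carrying those SANs; it is self-signed here for brevity (minikube signs with the ca.pem/ca-key.pem shown in the log), and the 26280h lifetime is borrowed from the profile's CertExpiration:

// Sketch only: a self-signed server certificate with the SANs listed in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-944514"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-944514"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.196")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
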
	I0819 20:11:51.698822  486861 buildroot.go:189] setting minikube options for container-runtime
	I0819 20:11:51.699078  486861 config.go:182] Loaded profile config "no-preload-944514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:11:51.699177  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHHostname
	I0819 20:11:51.701814  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.702163  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:51.702193  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.702528  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHPort
	I0819 20:11:51.702759  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:11:51.702965  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:11:51.703149  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHUsername
	I0819 20:11:51.703324  486861 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:51.703497  486861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0819 20:11:51.703513  486861 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 20:11:51.957988  486861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 20:11:51.958017  486861 machine.go:96] duration metric: took 900.389563ms to provisionDockerMachine
	I0819 20:11:51.958029  486861 start.go:293] postStartSetup for "no-preload-944514" (driver="kvm2")
	I0819 20:11:51.958039  486861 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 20:11:51.958056  486861 main.go:141] libmachine: (no-preload-944514) Calling .DriverName
	I0819 20:11:51.958392  486861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 20:11:51.958429  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHHostname
	I0819 20:11:51.960883  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.961221  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:51.961251  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:51.961453  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHPort
	I0819 20:11:51.961676  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:11:51.962014  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHUsername
	I0819 20:11:51.962166  486861 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/no-preload-944514/id_rsa Username:docker}
	I0819 20:11:52.040447  486861 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 20:11:52.044796  486861 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 20:11:52.044827  486861 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 20:11:52.044902  486861 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 20:11:52.044973  486861 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 20:11:52.045062  486861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 20:11:52.055615  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 20:11:52.080334  486861 start.go:296] duration metric: took 122.287304ms for postStartSetup
	I0819 20:11:52.080389  486861 fix.go:56] duration metric: took 21.13432357s for fixHost
	I0819 20:11:52.080416  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHHostname
	I0819 20:11:52.083170  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:52.083509  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:52.083534  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:52.083711  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHPort
	I0819 20:11:52.083947  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:11:52.084123  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:11:52.084290  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHUsername
	I0819 20:11:52.084464  486861 main.go:141] libmachine: Using SSH client type: native
	I0819 20:11:52.084651  486861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0819 20:11:52.084662  486861 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 20:11:52.186060  486861 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724098312.139556236
	
	I0819 20:11:52.186083  486861 fix.go:216] guest clock: 1724098312.139556236
	I0819 20:11:52.186091  486861 fix.go:229] Guest: 2024-08-19 20:11:52.139556236 +0000 UTC Remote: 2024-08-19 20:11:52.080394823 +0000 UTC m=+337.251052124 (delta=59.161413ms)
	I0819 20:11:52.186111  486861 fix.go:200] guest clock delta is within tolerance: 59.161413ms
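
fix.go reads the guest clock over SSH (date +%s.%N) and compares it with the host clock; here the 59ms delta is within tolerance, so no adjustment is made. A tiny sketch of that comparison, with a 1s tolerance assumed purely for illustration:

// Sketch only: the guest-vs-host clock comparison implied by the fix.go lines above.
package main

import (
	"fmt"
	"time"
)

func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(59 * time.Millisecond) // roughly the delta reported in the log
	delta, ok := clockDeltaOK(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}
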
	I0819 20:11:52.186117  486861 start.go:83] releasing machines lock for "no-preload-944514", held for 21.240079909s
	I0819 20:11:52.186133  486861 main.go:141] libmachine: (no-preload-944514) Calling .DriverName
	I0819 20:11:52.186422  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetIP
	I0819 20:11:52.189228  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:52.189614  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:52.189637  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:52.189862  486861 main.go:141] libmachine: (no-preload-944514) Calling .DriverName
	I0819 20:11:52.190421  486861 main.go:141] libmachine: (no-preload-944514) Calling .DriverName
	I0819 20:11:52.190632  486861 main.go:141] libmachine: (no-preload-944514) Calling .DriverName
	I0819 20:11:52.190737  486861 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 20:11:52.190821  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHHostname
	I0819 20:11:52.190852  486861 ssh_runner.go:195] Run: cat /version.json
	I0819 20:11:52.190872  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHHostname
	I0819 20:11:52.193624  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:52.193831  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:52.193962  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:52.193988  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:52.194162  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHPort
	I0819 20:11:52.194285  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:52.194313  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:52.194382  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:11:52.194442  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHPort
	I0819 20:11:52.194551  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHUsername
	I0819 20:11:52.194624  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:11:52.194708  486861 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/no-preload-944514/id_rsa Username:docker}
	I0819 20:11:52.194751  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHUsername
	I0819 20:11:52.194881  486861 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/no-preload-944514/id_rsa Username:docker}
	I0819 20:11:52.270470  486861 ssh_runner.go:195] Run: systemctl --version
	I0819 20:11:52.292825  486861 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 20:11:52.436169  486861 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 20:11:52.442163  486861 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 20:11:52.442252  486861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:11:52.459855  486861 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 20:11:52.459890  486861 start.go:495] detecting cgroup driver to use...
	I0819 20:11:52.459967  486861 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 20:11:52.476788  486861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 20:11:52.491677  486861 docker.go:217] disabling cri-docker service (if available) ...
	I0819 20:11:52.491754  486861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 20:11:52.506729  486861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 20:11:52.521842  486861 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 20:11:52.639992  486861 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 20:11:52.808016  486861 docker.go:233] disabling docker service ...
	I0819 20:11:52.808107  486861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 20:11:52.823212  486861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 20:11:52.837213  486861 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 20:11:52.957923  486861 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 20:11:53.098415  486861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 20:11:53.112399  486861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 20:11:53.131013  486861 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 20:11:53.131084  486861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:53.141923  486861 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 20:11:53.142000  486861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:53.153101  486861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:53.164410  486861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:53.176059  486861 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 20:11:53.188249  486861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:53.200987  486861 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:11:53.219073  486861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
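
The sed runs above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A sketch of the first three rewrites applied to an in-memory copy of the file (the sample config contents are illustrative):

// Sketch only: the same substitutions the logged sed commands perform, done in memory.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Mirror: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Mirror: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Mirror: sed '/conmon_cgroup = .*/d' then '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "${0}\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
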
	I0819 20:11:53.231767  486861 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 20:11:53.241999  486861 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 20:11:53.242076  486861 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 20:11:53.256568  486861 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
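
When the bridge netfilter sysctl is missing, as in the status-255 error above, the tooling falls back to loading br_netfilter and then enables IPv4 forwarding. A sketch of that fallback; it probes the /proc path directly rather than running sysctl, which is an approximation of the logged behaviour:

// Sketch only: the fallback sequence shown above; commands mirror the ones in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Sysctl key missing: mirror "sudo modprobe br_netfilter".
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Mirror: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if out, err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").CombinedOutput(); err != nil {
		return fmt.Errorf("enable ip_forward: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
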
	I0819 20:11:53.266985  486861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:11:53.398038  486861 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 20:11:53.541754  486861 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 20:11:53.541836  486861 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 20:11:53.546731  486861 start.go:563] Will wait 60s for crictl version
	I0819 20:11:53.546801  486861 ssh_runner.go:195] Run: which crictl
	I0819 20:11:53.550445  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 20:11:53.591947  486861 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 20:11:53.592041  486861 ssh_runner.go:195] Run: crio --version
	I0819 20:11:53.619827  486861 ssh_runner.go:195] Run: crio --version
	I0819 20:11:53.649886  486861 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 20:11:53.650974  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetIP
	I0819 20:11:53.653559  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:53.653868  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:11:53.653915  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:11:53.654162  486861 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 20:11:53.658326  486861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 20:11:53.670649  486861 kubeadm.go:883] updating cluster {Name:no-preload-944514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
	KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.196 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
	Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 20:11:53.670787  486861 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:11:53.670830  486861 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:11:53.703928  486861 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 20:11:53.703957  486861 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 20:11:53.703999  486861 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:11:53.704068  486861 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 20:11:53.704091  486861 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 20:11:53.704098  486861 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 20:11:53.704182  486861 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 20:11:53.704232  486861 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 20:11:53.704068  486861 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 20:11:53.704276  486861 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 20:11:53.705633  486861 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 20:11:53.705696  486861 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 20:11:53.705633  486861 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 20:11:53.705633  486861 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 20:11:53.705638  486861 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 20:11:53.705910  486861 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 20:11:53.705645  486861 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 20:11:53.705642  486861 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:11:53.875443  486861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 20:11:53.880893  486861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 20:11:53.881298  486861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 20:11:53.885052  486861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 20:11:53.892405  486861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 20:11:53.894243  486861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 20:11:53.959959  486861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 20:11:53.963641  486861 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 20:11:53.963691  486861 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 20:11:53.963739  486861 ssh_runner.go:195] Run: which crictl
	I0819 20:11:54.036878  486861 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 20:11:54.036939  486861 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 20:11:54.036942  486861 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 20:11:54.036982  486861 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 20:11:54.036993  486861 ssh_runner.go:195] Run: which crictl
	I0819 20:11:54.037030  486861 ssh_runner.go:195] Run: which crictl
	I0819 20:11:54.054359  486861 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 20:11:54.054427  486861 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 20:11:54.054490  486861 ssh_runner.go:195] Run: which crictl
	I0819 20:11:54.064098  486861 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 20:11:54.064147  486861 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 20:11:54.064204  486861 ssh_runner.go:195] Run: which crictl
	I0819 20:11:54.075194  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 20:11:54.075227  486861 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 20:11:54.075269  486861 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 20:11:54.075283  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 20:11:54.075303  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 20:11:54.075312  486861 ssh_runner.go:195] Run: which crictl
	I0819 20:11:54.075393  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 20:11:54.075403  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 20:11:54.175946  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 20:11:54.189088  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 20:11:54.189096  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 20:11:54.189180  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 20:11:54.189180  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 20:11:54.189240  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 20:11:54.269574  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 20:11:54.320998  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 20:11:54.329960  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 20:11:54.336790  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 20:11:54.336809  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 20:11:54.336929  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 20:11:54.366826  486861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 20:11:54.366969  486861 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 20:11:54.406480  486861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:11:54.437533  486861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 20:11:54.437632  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 20:11:54.437650  486861 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 20:11:54.476698  486861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 20:11:54.476698  486861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 20:11:54.476787  486861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 20:11:54.476827  486861 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 20:11:54.476836  486861 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 20:11:54.476844  486861 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 20:11:54.476872  486861 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 20:11:54.476877  486861 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 20:11:54.476833  486861 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 20:11:54.531589  486861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 20:11:54.531657  486861 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 20:11:54.531704  486861 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0819 20:11:54.531705  486861 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 20:11:54.531752  486861 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:11:54.531797  486861 ssh_runner.go:195] Run: which crictl
	I0819 20:11:54.531713  486861 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 20:11:51.230594  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:11:53.728601  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:11:53.226198  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:53.726617  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:54.227115  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:54.726372  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:55.226807  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:55.726453  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:56.226400  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:56.726977  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:57.227107  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:57.726910  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:56.987231  486861 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.510323052s)
	I0819 20:11:56.987287  486861 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 20:11:56.987287  486861 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.510377822s)
	I0819 20:11:56.987312  486861 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 20:11:56.987329  486861 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 20:11:56.987347  486861 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.510418829s)
	I0819 20:11:56.987377  486861 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 20:11:56.987402  486861 ssh_runner.go:235] Completed: which crictl: (2.455583474s)
	I0819 20:11:56.987428  486861 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.455543377s)
	I0819 20:11:56.987378  486861 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 20:11:56.987449  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:11:56.987459  486861 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 20:11:58.880302  486861 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.892897197s)
	I0819 20:11:58.880341  486861 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 20:11:58.880368  486861 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 20:11:58.880377  486861 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.892905035s)
	I0819 20:11:58.880432  486861 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 20:11:58.880442  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:11:58.918430  486861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:11:55.729888  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:11:58.229521  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:11:58.227103  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:58.726180  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:59.226207  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:11:59.727092  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:00.226164  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:00.726922  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:01.226965  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:01.727068  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:02.227178  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:02.726900  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:00.258572  486861 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.378109099s)
	I0819 20:12:00.258612  486861 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.340153974s)
	I0819 20:12:00.258617  486861 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 20:12:00.258648  486861 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 20:12:00.258653  486861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 20:12:00.258691  486861 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 20:12:00.258745  486861 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 20:12:02.308928  486861 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.050210222s)
	I0819 20:12:02.308972  486861 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 20:12:02.308998  486861 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 20:12:02.309017  486861 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.05024672s)
	I0819 20:12:02.309050  486861 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 20:12:02.309061  486861 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 20:12:04.267220  486861 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.958132057s)
	I0819 20:12:04.267273  486861 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 20:12:04.267308  486861 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 20:12:04.267375  486861 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 20:12:00.229796  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:02.728878  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:04.729112  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:03.226145  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:03.727036  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:04.226323  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:04.727085  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:05.226985  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:05.727122  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:06.226220  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:06.727100  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:07.226687  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:07.726910  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:07.988568  486861 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.721164506s)
	I0819 20:12:07.988651  486861 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 20:12:07.988705  486861 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 20:12:07.988785  486861 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 20:12:08.657917  486861 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 20:12:08.657965  486861 cache_images.go:123] Successfully loaded all cached images
	I0819 20:12:08.657971  486861 cache_images.go:92] duration metric: took 14.954002156s to LoadCachedImages
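
Each image in the 14.9s LoadCachedImages pass above follows the same pattern: podman image inspect to see whether the runtime already has it, crictl rmi to clear a stale tag, then podman load -i on the tarball staged under /var/lib/minikube/images. A rough sketch of one iteration (the function name and error handling are illustrative, not minikube's cache_images API):

// Sketch only: one iteration of the cached-image load sequence traced in the log.
package main

import (
	"fmt"
	"os/exec"
)

func loadCachedImage(image, tarball string) error {
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present in the container runtime
	}
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run() // ignore "image not found"
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	err := loadCachedImage("registry.k8s.io/etcd:3.5.15-0", "/var/lib/minikube/images/etcd_3.5.15-0")
	fmt.Println(err)
}
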
	I0819 20:12:08.657986  486861 kubeadm.go:934] updating node { 192.168.61.196 8443 v1.31.0 crio true true} ...
	I0819 20:12:08.658158  486861 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-944514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-944514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
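
The kubelet drop-in above is rendered from the node's profile (Kubernetes version, hostname override, node IP). A small text/template sketch that produces the same unit text; the template and field names are assumptions, the flag values are the ones in the log:

// Sketch only: rendering the kubelet systemd drop-in shown above.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.0",
		"NodeName":          "no-preload-944514",
		"NodeIP":            "192.168.61.196",
	})
}
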
	I0819 20:12:08.658247  486861 ssh_runner.go:195] Run: crio config
	I0819 20:12:08.706687  486861 cni.go:84] Creating CNI manager for ""
	I0819 20:12:08.706712  486861 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 20:12:08.706730  486861 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 20:12:08.706756  486861 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.196 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-944514 NodeName:no-preload-944514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
	StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 20:12:08.706919  486861 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-944514"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 20:12:08.706996  486861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 20:12:08.717428  486861 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 20:12:08.717531  486861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 20:12:08.727381  486861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 20:12:08.746568  486861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 20:12:08.763368  486861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
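
The kubeadm.yaml.new copied above is the multi-document YAML rendered a few lines earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---". A minimal Go sketch (illustrative only, not minikube's code; the path is the one from this log) that splits such a file on document separators and reports the kind of each document:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// kindRe matches the top-level "kind:" field of a YAML document.
var kindRe = regexp.MustCompile(`(?m)^kind:\s*(\S+)`)

func main() {
	// Example path taken from the log above; adjust as needed.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm configs are multi-document YAML separated by "---" lines.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: kind=%s\n", i, m[1])
		}
	}
}
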
	I0819 20:12:08.781109  486861 ssh_runner.go:195] Run: grep 192.168.61.196	control-plane.minikube.internal$ /etc/hosts
	I0819 20:12:08.785091  486861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
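
The bash one-liner above drops any stale control-plane.minikube.internal line from /etc/hosts and appends the current node IP so the control-plane endpoint resolves locally. A minimal Go sketch of the same idea, assuming the IP and hostname from this log; it is not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts file so that exactly one line maps
// hostname to ip, dropping any existing lines for the same hostname.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == hostname {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	err := ensureHostsEntry("/etc/hosts", "192.168.61.196", "control-plane.minikube.internal")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
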
	I0819 20:12:08.797259  486861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:12:08.933120  486861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:12:08.961829  486861 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/no-preload-944514 for IP: 192.168.61.196
	I0819 20:12:08.961857  486861 certs.go:194] generating shared ca certs ...
	I0819 20:12:08.961894  486861 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:12:08.962088  486861 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 20:12:08.962152  486861 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 20:12:08.962165  486861 certs.go:256] generating profile certs ...
	I0819 20:12:08.962251  486861 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/no-preload-944514/client.key
	I0819 20:12:08.962310  486861 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/no-preload-944514/apiserver.key.bf9c13c5
	I0819 20:12:08.962343  486861 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/no-preload-944514/proxy-client.key
	I0819 20:12:08.962456  486861 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 20:12:08.962484  486861 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 20:12:08.962500  486861 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 20:12:08.962523  486861 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 20:12:08.962548  486861 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 20:12:08.962569  486861 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 20:12:08.962623  486861 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 20:12:08.963600  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 20:12:09.003970  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 20:12:09.043966  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 20:12:09.080376  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 20:12:09.113398  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/no-preload-944514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 20:12:09.145509  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/no-preload-944514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 20:12:09.170080  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/no-preload-944514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 20:12:09.194328  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/no-preload-944514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 20:12:09.219407  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 20:12:09.246433  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 20:12:09.272073  486861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 20:12:09.297501  486861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 20:12:09.316460  486861 ssh_runner.go:195] Run: openssl version
	I0819 20:12:09.322856  486861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 20:12:09.334803  486861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 20:12:09.339762  486861 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 20:12:09.339852  486861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 20:12:09.345921  486861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 20:12:09.356951  486861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 20:12:09.368636  486861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 20:12:09.375030  486861 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 20:12:09.375109  486861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 20:12:09.382851  486861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 20:12:09.393951  486861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 20:12:09.405591  486861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:12:09.410851  486861 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:12:09.410920  486861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:12:09.416805  486861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
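
Each `openssl x509 -hash -noout` call above computes the OpenSSL subject-hash name (for example b5213941.0) under which libraries expect to find a CA in /etc/ssl/certs, and the follow-up `ln -fs` creates that symlink. A hedged Go sketch of the same pattern, shelling out to openssl; paths are examples, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and
// creates the <hash>.0 symlink in linkDir, mirroring the log's ln -fs step.
func linkBySubjectHash(certPath, linkDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(linkDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if present
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}
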
	I0819 20:12:09.428041  486861 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 20:12:09.432814  486861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 20:12:09.438882  486861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 20:12:09.445055  486861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 20:12:09.451120  486861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 20:12:09.457205  486861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 20:12:09.463316  486861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
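
The `openssl x509 -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours before it is reused. A minimal stdlib Go equivalent of that check (illustrative only; the path is one of the certs from this log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at
// path expires within d, which is what `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
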
	I0819 20:12:09.469381  486861 kubeadm.go:392] StartCluster: {Name:no-preload-944514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.196 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:12:09.469490  486861 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 20:12:09.469553  486861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 20:12:09.510784  486861 cri.go:89] found id: ""
	I0819 20:12:09.510858  486861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 20:12:09.521423  486861 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 20:12:09.521449  486861 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 20:12:09.521505  486861 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 20:12:09.531345  486861 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 20:12:09.532575  486861 kubeconfig.go:125] found "no-preload-944514" server: "https://192.168.61.196:8443"
	I0819 20:12:09.536639  486861 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 20:12:09.546804  486861 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.196
	I0819 20:12:09.546840  486861 kubeadm.go:1160] stopping kube-system containers ...
	I0819 20:12:09.546853  486861 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 20:12:09.546919  486861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 20:12:09.581306  486861 cri.go:89] found id: ""
	I0819 20:12:09.581391  486861 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 20:12:09.602299  486861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 20:12:09.612431  486861 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 20:12:09.612462  486861 kubeadm.go:157] found existing configuration files:
	
	I0819 20:12:09.612514  486861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 20:12:09.621930  486861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 20:12:09.622000  486861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 20:12:09.631627  486861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 20:12:09.640959  486861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 20:12:09.641054  486861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 20:12:09.650666  486861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 20:12:09.659908  486861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 20:12:09.659975  486861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 20:12:09.670188  486861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 20:12:09.681101  486861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 20:12:09.681199  486861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 20:12:09.692151  486861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 20:12:09.702301  486861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:12:09.806722  486861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:12:06.729491  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:08.730279  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:08.226861  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:08.726920  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:09.227075  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:09.726975  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:10.226990  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:10.726147  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:11.226154  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:11.726324  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:12.226366  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:12.726235  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:10.538569  486861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:12:10.742661  486861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:12:10.810039  486861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:12:10.896353  486861 api_server.go:52] waiting for apiserver process to appear ...
	I0819 20:12:10.896461  486861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:11.396663  486861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:11.897468  486861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:11.913397  486861 api_server.go:72] duration metric: took 1.017063408s to wait for apiserver process to appear ...
	I0819 20:12:11.913426  486861 api_server.go:88] waiting for apiserver healthz status ...
	I0819 20:12:11.913445  486861 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0819 20:12:14.186622  486861 api_server.go:279] https://192.168.61.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 20:12:14.186658  486861 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 20:12:14.186680  486861 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0819 20:12:14.220594  486861 api_server.go:279] https://192.168.61.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 20:12:14.220637  486861 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 20:12:14.413962  486861 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0819 20:12:14.419462  486861 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:12:14.419511  486861 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:12:11.229126  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:13.729585  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:14.914506  486861 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0819 20:12:14.921322  486861 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:12:14.921357  486861 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:12:15.414249  486861 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0819 20:12:15.425151  486861 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:12:15.425204  486861 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:12:15.914359  486861 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0819 20:12:15.919937  486861 api_server.go:279] https://192.168.61.196:8443/healthz returned 200:
	ok
	I0819 20:12:15.929686  486861 api_server.go:141] control plane version: v1.31.0
	I0819 20:12:15.929726  486861 api_server.go:131] duration metric: took 4.016292378s to wait for apiserver health ...
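
The sequence above is typical of an apiserver restart: /healthz first answers 403 for anonymous requests, then 500 while post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes finish, and finally 200 "ok". A hedged Go sketch that polls the endpoint until it reports healthy (TLS verification is skipped only because this sketch carries no cluster CA; a real client would present the cluster certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or the deadline passes.
// 403 and 500 responses are expected while the apiserver finishes its
// post-start hooks.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %.60s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.61.196:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
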
	I0819 20:12:15.929738  486861 cni.go:84] Creating CNI manager for ""
	I0819 20:12:15.929745  486861 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 20:12:15.931740  486861 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 20:12:13.226887  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:13.727060  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:14.226356  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:14.726375  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:15.227064  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:15.727138  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:16.226402  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:16.726888  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:17.227111  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:17.727172  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:15.933183  486861 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 20:12:15.946243  486861 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 20:12:15.975276  486861 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 20:12:15.986252  486861 system_pods.go:59] 8 kube-system pods found
	I0819 20:12:15.986306  486861 system_pods.go:61] "coredns-6f6b679f8f-cr2cr" [4a461064-9da2-4c96-9709-4ad4fe690834] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 20:12:15.986319  486861 system_pods.go:61] "etcd-no-preload-944514" [ac9b74f2-acfa-469d-926c-6036c959da97] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 20:12:15.986330  486861 system_pods.go:61] "kube-apiserver-no-preload-944514" [25665722-100c-4c99-8514-b715a4dc36bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 20:12:15.986338  486861 system_pods.go:61] "kube-controller-manager-no-preload-944514" [cb1b2fcf-1f84-42ea-9684-b3933d8cf885] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 20:12:15.986346  486861 system_pods.go:61] "kube-proxy-w4p9f" [b4bc6b34-d43d-4fcd-8d36-c6f7a601dc42] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 20:12:15.986353  486861 system_pods.go:61] "kube-scheduler-no-preload-944514" [d69da1de-2433-4bcb-baf6-6a95dcb0c7f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 20:12:15.986361  486861 system_pods.go:61] "metrics-server-6867b74b74-pwvmg" [e91529b8-1820-4f7c-8968-811116aba783] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 20:12:15.986368  486861 system_pods.go:61] "storage-provisioner" [80895eb3-a3a0-45dc-9f09-0bd17a129407] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 20:12:15.986377  486861 system_pods.go:74] duration metric: took 11.07502ms to wait for pod list to return data ...
	I0819 20:12:15.986386  486861 node_conditions.go:102] verifying NodePressure condition ...
	I0819 20:12:15.991225  486861 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 20:12:15.991262  486861 node_conditions.go:123] node cpu capacity is 2
	I0819 20:12:15.991277  486861 node_conditions.go:105] duration metric: took 4.886301ms to run NodePressure ...
	I0819 20:12:15.991300  486861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 20:12:16.323043  486861 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 20:12:16.328595  486861 kubeadm.go:739] kubelet initialised
	I0819 20:12:16.328628  486861 kubeadm.go:740] duration metric: took 5.553697ms waiting for restarted kubelet to initialise ...
	I0819 20:12:16.328640  486861 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:12:16.336174  486861 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-cr2cr" in "kube-system" namespace to be "Ready" ...
	I0819 20:12:16.343863  486861 pod_ready.go:98] node "no-preload-944514" hosting pod "coredns-6f6b679f8f-cr2cr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944514" has status "Ready":"False"
	I0819 20:12:16.343897  486861 pod_ready.go:82] duration metric: took 7.690064ms for pod "coredns-6f6b679f8f-cr2cr" in "kube-system" namespace to be "Ready" ...
	E0819 20:12:16.343910  486861 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944514" hosting pod "coredns-6f6b679f8f-cr2cr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944514" has status "Ready":"False"
	I0819 20:12:16.343921  486861 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	I0819 20:12:16.355578  486861 pod_ready.go:98] node "no-preload-944514" hosting pod "etcd-no-preload-944514" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944514" has status "Ready":"False"
	I0819 20:12:16.355614  486861 pod_ready.go:82] duration metric: took 11.681799ms for pod "etcd-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	E0819 20:12:16.355627  486861 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944514" hosting pod "etcd-no-preload-944514" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944514" has status "Ready":"False"
	I0819 20:12:16.355637  486861 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	I0819 20:12:16.364197  486861 pod_ready.go:98] node "no-preload-944514" hosting pod "kube-apiserver-no-preload-944514" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944514" has status "Ready":"False"
	I0819 20:12:16.364227  486861 pod_ready.go:82] duration metric: took 8.582107ms for pod "kube-apiserver-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	E0819 20:12:16.364240  486861 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944514" hosting pod "kube-apiserver-no-preload-944514" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944514" has status "Ready":"False"
	I0819 20:12:16.364250  486861 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	I0819 20:12:16.379399  486861 pod_ready.go:98] node "no-preload-944514" hosting pod "kube-controller-manager-no-preload-944514" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944514" has status "Ready":"False"
	I0819 20:12:16.379436  486861 pod_ready.go:82] duration metric: took 15.174722ms for pod "kube-controller-manager-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	E0819 20:12:16.379447  486861 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944514" hosting pod "kube-controller-manager-no-preload-944514" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944514" has status "Ready":"False"
	I0819 20:12:16.379457  486861 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-w4p9f" in "kube-system" namespace to be "Ready" ...
	I0819 20:12:16.778096  486861 pod_ready.go:93] pod "kube-proxy-w4p9f" in "kube-system" namespace has status "Ready":"True"
	I0819 20:12:16.778120  486861 pod_ready.go:82] duration metric: took 398.653985ms for pod "kube-proxy-w4p9f" in "kube-system" namespace to be "Ready" ...
	I0819 20:12:16.778130  486861 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	I0819 20:12:18.784323  486861 pod_ready.go:103] pod "kube-scheduler-no-preload-944514" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:16.229365  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:18.730155  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
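
The pod_ready checks above repeatedly fetch each system-critical pod and compare its Ready condition (and, after a restart, the hosting node's Ready condition) until it turns "True" or the 4m0s budget runs out. A rough client-go sketch of that polling loop, assuming a kubeconfig on disk; this is an illustration under those assumptions, not minikube's pod_ready implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its Ready condition is True or timeout.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	// Example kubeconfig path; any valid kubeconfig for the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "kube-scheduler-no-preload-944514", 4*time.Minute))
}
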
	I0819 20:12:18.226559  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:18.726447  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:19.226614  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:19.726630  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:20.227022  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:20.726361  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:21.226257  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:21.726528  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:22.227024  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:22.727079  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:20.788688  486861 pod_ready.go:103] pod "kube-scheduler-no-preload-944514" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:23.284148  486861 pod_ready.go:103] pod "kube-scheduler-no-preload-944514" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:21.229504  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:23.229986  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:23.226376  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:23.726731  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:24.226152  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:24.726357  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:25.226868  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:25.726196  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:26.226983  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:26.726364  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:27.226951  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:27.726815  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:25.284725  486861 pod_ready.go:103] pod "kube-scheduler-no-preload-944514" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:25.784884  486861 pod_ready.go:93] pod "kube-scheduler-no-preload-944514" in "kube-system" namespace has status "Ready":"True"
	I0819 20:12:25.784911  486861 pod_ready.go:82] duration metric: took 9.00677483s for pod "kube-scheduler-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	I0819 20:12:25.784921  486861 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace to be "Ready" ...
	I0819 20:12:27.795743  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:25.729729  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:28.228742  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:28.226909  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:28.726869  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:29.226769  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:29.727011  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:30.226213  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:30.727160  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:31.227167  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:31.726350  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:32.226260  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:32.726371  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:30.291421  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:32.291846  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:34.791125  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:30.229678  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:32.728884  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:34.729290  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:33.227082  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:33.726622  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:34.226379  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:34.726389  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:35.226381  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:35.726362  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:36.226347  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:36.726998  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:37.227006  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:37.726385  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:36.791481  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:39.290972  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:36.729412  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:39.228941  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:38.226899  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:38.726550  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:39.226866  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:39.726387  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:40.226262  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:40.727190  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:12:40.727286  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:12:40.766480  487755 cri.go:89] found id: ""
	I0819 20:12:40.766513  487755 logs.go:276] 0 containers: []
	W0819 20:12:40.766522  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:12:40.766527  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:12:40.766590  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:12:40.799702  487755 cri.go:89] found id: ""
	I0819 20:12:40.799742  487755 logs.go:276] 0 containers: []
	W0819 20:12:40.799754  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:12:40.799761  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:12:40.799833  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:12:40.832918  487755 cri.go:89] found id: ""
	I0819 20:12:40.832949  487755 logs.go:276] 0 containers: []
	W0819 20:12:40.832957  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:12:40.832965  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:12:40.833054  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:12:40.869836  487755 cri.go:89] found id: ""
	I0819 20:12:40.869865  487755 logs.go:276] 0 containers: []
	W0819 20:12:40.869878  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:12:40.869886  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:12:40.869965  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:12:40.903184  487755 cri.go:89] found id: ""
	I0819 20:12:40.903211  487755 logs.go:276] 0 containers: []
	W0819 20:12:40.903221  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:12:40.903227  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:12:40.903282  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:12:40.942220  487755 cri.go:89] found id: ""
	I0819 20:12:40.942249  487755 logs.go:276] 0 containers: []
	W0819 20:12:40.942257  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:12:40.942264  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:12:40.942317  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:12:40.977890  487755 cri.go:89] found id: ""
	I0819 20:12:40.977920  487755 logs.go:276] 0 containers: []
	W0819 20:12:40.977929  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:12:40.977935  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:12:40.977992  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:12:41.018227  487755 cri.go:89] found id: ""
	I0819 20:12:41.018257  487755 logs.go:276] 0 containers: []
	W0819 20:12:41.018265  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:12:41.018274  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:12:41.018291  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:12:41.055983  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:12:41.056015  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:12:41.107039  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:12:41.107083  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:12:41.120249  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:12:41.120285  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:12:41.242814  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:12:41.242843  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:12:41.242859  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
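
The block above shows the other profile (process 487755, running Kubernetes v1.20.0 binaries) finding no control-plane containers at all: every `crictl ps -a --quiet --name=...` query comes back empty, so only kubelet, dmesg and CRI-O journal logs can be gathered and `kubectl describe nodes` fails against localhost:8443. A hedged Go sketch of that per-component query, using the same component names the log checks; it is not minikube's logs.go code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs crictl reports for containers whose
// name matches the given component, across all states.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: error: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers\n", c, len(ids))
	}
}
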
	I0819 20:12:41.790644  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:43.792140  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:41.729439  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:44.229456  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:43.823672  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:43.836804  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:12:43.836891  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:12:43.875320  487755 cri.go:89] found id: ""
	I0819 20:12:43.875349  487755 logs.go:276] 0 containers: []
	W0819 20:12:43.875358  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:12:43.875364  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:12:43.875423  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:12:43.913898  487755 cri.go:89] found id: ""
	I0819 20:12:43.913927  487755 logs.go:276] 0 containers: []
	W0819 20:12:43.913936  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:12:43.913942  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:12:43.913997  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:12:43.970650  487755 cri.go:89] found id: ""
	I0819 20:12:43.970694  487755 logs.go:276] 0 containers: []
	W0819 20:12:43.970706  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:12:43.970714  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:12:43.970784  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:12:44.005752  487755 cri.go:89] found id: ""
	I0819 20:12:44.005777  487755 logs.go:276] 0 containers: []
	W0819 20:12:44.005787  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:12:44.005793  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:12:44.005859  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:12:44.039113  487755 cri.go:89] found id: ""
	I0819 20:12:44.039144  487755 logs.go:276] 0 containers: []
	W0819 20:12:44.039157  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:12:44.039165  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:12:44.039230  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:12:44.076015  487755 cri.go:89] found id: ""
	I0819 20:12:44.076049  487755 logs.go:276] 0 containers: []
	W0819 20:12:44.076059  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:12:44.076065  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:12:44.076121  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:12:44.114387  487755 cri.go:89] found id: ""
	I0819 20:12:44.114416  487755 logs.go:276] 0 containers: []
	W0819 20:12:44.114425  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:12:44.114430  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:12:44.114487  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:12:44.147118  487755 cri.go:89] found id: ""
	I0819 20:12:44.147145  487755 logs.go:276] 0 containers: []
	W0819 20:12:44.147155  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:12:44.147168  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:12:44.147184  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:12:44.198102  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:12:44.198145  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:12:44.212342  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:12:44.212373  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:12:44.294472  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:12:44.294503  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:12:44.294519  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:12:44.373825  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:12:44.373867  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:12:46.915138  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:46.928351  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:12:46.928447  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:12:46.961552  487755 cri.go:89] found id: ""
	I0819 20:12:46.961590  487755 logs.go:276] 0 containers: []
	W0819 20:12:46.961602  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:12:46.961610  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:12:46.961684  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:12:46.998368  487755 cri.go:89] found id: ""
	I0819 20:12:46.998401  487755 logs.go:276] 0 containers: []
	W0819 20:12:46.998410  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:12:46.998416  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:12:46.998468  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:12:47.031867  487755 cri.go:89] found id: ""
	I0819 20:12:47.031908  487755 logs.go:276] 0 containers: []
	W0819 20:12:47.031921  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:12:47.031929  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:12:47.032047  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:12:47.065611  487755 cri.go:89] found id: ""
	I0819 20:12:47.065638  487755 logs.go:276] 0 containers: []
	W0819 20:12:47.065648  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:12:47.065654  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:12:47.065714  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:12:47.105509  487755 cri.go:89] found id: ""
	I0819 20:12:47.105538  487755 logs.go:276] 0 containers: []
	W0819 20:12:47.105547  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:12:47.105552  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:12:47.105610  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:12:47.139503  487755 cri.go:89] found id: ""
	I0819 20:12:47.139556  487755 logs.go:276] 0 containers: []
	W0819 20:12:47.139570  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:12:47.139578  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:12:47.139642  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:12:47.173202  487755 cri.go:89] found id: ""
	I0819 20:12:47.173235  487755 logs.go:276] 0 containers: []
	W0819 20:12:47.173245  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:12:47.173252  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:12:47.173311  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:12:47.207759  487755 cri.go:89] found id: ""
	I0819 20:12:47.207792  487755 logs.go:276] 0 containers: []
	W0819 20:12:47.207801  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:12:47.207812  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:12:47.207826  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:12:47.222938  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:12:47.222971  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:12:47.296448  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:12:47.296471  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:12:47.296488  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:12:47.382346  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:12:47.382391  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:12:47.421459  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:12:47.421492  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:12:46.291124  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:48.292079  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:46.728706  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:48.729805  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:49.973509  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:49.986827  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:12:49.986901  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:12:50.022391  487755 cri.go:89] found id: ""
	I0819 20:12:50.022427  487755 logs.go:276] 0 containers: []
	W0819 20:12:50.022439  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:12:50.022447  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:12:50.022534  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:12:50.055865  487755 cri.go:89] found id: ""
	I0819 20:12:50.055897  487755 logs.go:276] 0 containers: []
	W0819 20:12:50.055906  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:12:50.055912  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:12:50.055970  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:12:50.091090  487755 cri.go:89] found id: ""
	I0819 20:12:50.091128  487755 logs.go:276] 0 containers: []
	W0819 20:12:50.091141  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:12:50.091150  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:12:50.091223  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:12:50.125624  487755 cri.go:89] found id: ""
	I0819 20:12:50.125652  487755 logs.go:276] 0 containers: []
	W0819 20:12:50.125661  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:12:50.125667  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:12:50.125726  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:12:50.159405  487755 cri.go:89] found id: ""
	I0819 20:12:50.159444  487755 logs.go:276] 0 containers: []
	W0819 20:12:50.159456  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:12:50.159465  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:12:50.159527  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:12:50.195853  487755 cri.go:89] found id: ""
	I0819 20:12:50.195887  487755 logs.go:276] 0 containers: []
	W0819 20:12:50.195899  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:12:50.195908  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:12:50.195981  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:12:50.231861  487755 cri.go:89] found id: ""
	I0819 20:12:50.231890  487755 logs.go:276] 0 containers: []
	W0819 20:12:50.231899  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:12:50.231907  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:12:50.231972  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:12:50.266012  487755 cri.go:89] found id: ""
	I0819 20:12:50.266044  487755 logs.go:276] 0 containers: []
	W0819 20:12:50.266056  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:12:50.266072  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:12:50.266087  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:12:50.352249  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:12:50.352295  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:12:50.416074  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:12:50.416111  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:12:50.473274  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:12:50.473318  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:12:50.486491  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:12:50.486524  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:12:50.563891  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:12:50.292153  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:52.790951  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:50.732107  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:53.231294  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:53.065074  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:53.079852  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:12:53.079928  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:12:53.113338  487755 cri.go:89] found id: ""
	I0819 20:12:53.113368  487755 logs.go:276] 0 containers: []
	W0819 20:12:53.113378  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:12:53.113387  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:12:53.113455  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:12:53.149364  487755 cri.go:89] found id: ""
	I0819 20:12:53.149395  487755 logs.go:276] 0 containers: []
	W0819 20:12:53.149404  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:12:53.149411  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:12:53.149469  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:12:53.183909  487755 cri.go:89] found id: ""
	I0819 20:12:53.183941  487755 logs.go:276] 0 containers: []
	W0819 20:12:53.183952  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:12:53.183958  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:12:53.184014  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:12:53.219668  487755 cri.go:89] found id: ""
	I0819 20:12:53.219695  487755 logs.go:276] 0 containers: []
	W0819 20:12:53.219704  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:12:53.219710  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:12:53.219763  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:12:53.257915  487755 cri.go:89] found id: ""
	I0819 20:12:53.257964  487755 logs.go:276] 0 containers: []
	W0819 20:12:53.257977  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:12:53.257986  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:12:53.258053  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:12:53.294845  487755 cri.go:89] found id: ""
	I0819 20:12:53.294870  487755 logs.go:276] 0 containers: []
	W0819 20:12:53.294878  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:12:53.294886  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:12:53.294946  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:12:53.330544  487755 cri.go:89] found id: ""
	I0819 20:12:53.330571  487755 logs.go:276] 0 containers: []
	W0819 20:12:53.330580  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:12:53.330588  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:12:53.330657  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:12:53.365392  487755 cri.go:89] found id: ""
	I0819 20:12:53.365428  487755 logs.go:276] 0 containers: []
	W0819 20:12:53.365440  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:12:53.365453  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:12:53.365469  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:12:53.420014  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:12:53.420060  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:12:53.438273  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:12:53.438309  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:12:53.503640  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:12:53.503672  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:12:53.503690  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:12:53.579823  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:12:53.579871  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:12:56.119113  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:56.134877  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:12:56.134955  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:12:56.177744  487755 cri.go:89] found id: ""
	I0819 20:12:56.177781  487755 logs.go:276] 0 containers: []
	W0819 20:12:56.177794  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:12:56.177803  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:12:56.177876  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:12:56.214669  487755 cri.go:89] found id: ""
	I0819 20:12:56.214703  487755 logs.go:276] 0 containers: []
	W0819 20:12:56.214714  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:12:56.214722  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:12:56.214787  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:12:56.247770  487755 cri.go:89] found id: ""
	I0819 20:12:56.247810  487755 logs.go:276] 0 containers: []
	W0819 20:12:56.247825  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:12:56.247835  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:12:56.247911  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:12:56.282517  487755 cri.go:89] found id: ""
	I0819 20:12:56.282550  487755 logs.go:276] 0 containers: []
	W0819 20:12:56.282562  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:12:56.282569  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:12:56.282626  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:12:56.317359  487755 cri.go:89] found id: ""
	I0819 20:12:56.317385  487755 logs.go:276] 0 containers: []
	W0819 20:12:56.317395  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:12:56.317402  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:12:56.317463  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:12:56.354393  487755 cri.go:89] found id: ""
	I0819 20:12:56.354424  487755 logs.go:276] 0 containers: []
	W0819 20:12:56.354433  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:12:56.354441  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:12:56.354528  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:12:56.387145  487755 cri.go:89] found id: ""
	I0819 20:12:56.387176  487755 logs.go:276] 0 containers: []
	W0819 20:12:56.387187  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:12:56.387196  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:12:56.387266  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:12:56.421395  487755 cri.go:89] found id: ""
	I0819 20:12:56.421423  487755 logs.go:276] 0 containers: []
	W0819 20:12:56.421432  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:12:56.421441  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:12:56.421454  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:12:56.497399  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:12:56.497444  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:12:56.534968  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:12:56.535015  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:12:56.589163  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:12:56.589211  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:12:56.602811  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:12:56.602844  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:12:56.671182  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:12:55.291758  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:57.791727  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:59.792671  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:55.728973  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:58.229263  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:12:59.172171  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:12:59.186592  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:12:59.186674  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:12:59.222447  487755 cri.go:89] found id: ""
	I0819 20:12:59.222485  487755 logs.go:276] 0 containers: []
	W0819 20:12:59.222497  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:12:59.222505  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:12:59.222590  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:12:59.257181  487755 cri.go:89] found id: ""
	I0819 20:12:59.257215  487755 logs.go:276] 0 containers: []
	W0819 20:12:59.257224  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:12:59.257231  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:12:59.257299  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:12:59.293249  487755 cri.go:89] found id: ""
	I0819 20:12:59.293273  487755 logs.go:276] 0 containers: []
	W0819 20:12:59.293282  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:12:59.293288  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:12:59.293351  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:12:59.329933  487755 cri.go:89] found id: ""
	I0819 20:12:59.329969  487755 logs.go:276] 0 containers: []
	W0819 20:12:59.329981  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:12:59.329990  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:12:59.330056  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:12:59.364889  487755 cri.go:89] found id: ""
	I0819 20:12:59.364919  487755 logs.go:276] 0 containers: []
	W0819 20:12:59.364926  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:12:59.364932  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:12:59.364990  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:12:59.399074  487755 cri.go:89] found id: ""
	I0819 20:12:59.399104  487755 logs.go:276] 0 containers: []
	W0819 20:12:59.399115  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:12:59.399123  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:12:59.399195  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:12:59.434629  487755 cri.go:89] found id: ""
	I0819 20:12:59.434663  487755 logs.go:276] 0 containers: []
	W0819 20:12:59.434674  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:12:59.434682  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:12:59.434779  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:12:59.468253  487755 cri.go:89] found id: ""
	I0819 20:12:59.468286  487755 logs.go:276] 0 containers: []
	W0819 20:12:59.468309  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:12:59.468323  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:12:59.468354  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:12:59.518180  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:12:59.518227  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:12:59.531697  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:12:59.531738  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:12:59.604703  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:12:59.604733  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:12:59.604750  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:12:59.687755  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:12:59.687812  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:02.227305  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:02.240498  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:02.240572  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:02.278364  487755 cri.go:89] found id: ""
	I0819 20:13:02.278398  487755 logs.go:276] 0 containers: []
	W0819 20:13:02.278409  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:02.278417  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:02.278490  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:02.313107  487755 cri.go:89] found id: ""
	I0819 20:13:02.313161  487755 logs.go:276] 0 containers: []
	W0819 20:13:02.313175  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:02.313183  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:02.313245  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:02.348101  487755 cri.go:89] found id: ""
	I0819 20:13:02.348128  487755 logs.go:276] 0 containers: []
	W0819 20:13:02.348137  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:02.348143  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:02.348198  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:02.382472  487755 cri.go:89] found id: ""
	I0819 20:13:02.382505  487755 logs.go:276] 0 containers: []
	W0819 20:13:02.382517  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:02.382525  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:02.382601  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:02.417929  487755 cri.go:89] found id: ""
	I0819 20:13:02.417959  487755 logs.go:276] 0 containers: []
	W0819 20:13:02.417968  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:02.417973  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:02.418029  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:02.453247  487755 cri.go:89] found id: ""
	I0819 20:13:02.453283  487755 logs.go:276] 0 containers: []
	W0819 20:13:02.453294  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:02.453302  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:02.453373  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:02.491326  487755 cri.go:89] found id: ""
	I0819 20:13:02.491354  487755 logs.go:276] 0 containers: []
	W0819 20:13:02.491362  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:02.491368  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:02.491422  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:02.526642  487755 cri.go:89] found id: ""
	I0819 20:13:02.526686  487755 logs.go:276] 0 containers: []
	W0819 20:13:02.526698  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:02.526713  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:02.526734  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:02.542045  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:02.542094  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:02.611364  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:02.611392  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:02.611405  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:02.693632  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:02.693683  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:02.733078  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:02.733110  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:02.292226  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:04.791898  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:00.729585  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:02.729938  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:05.283376  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:05.296399  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:05.296481  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:05.333973  487755 cri.go:89] found id: ""
	I0819 20:13:05.334001  487755 logs.go:276] 0 containers: []
	W0819 20:13:05.334010  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:05.334016  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:05.334086  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:05.370427  487755 cri.go:89] found id: ""
	I0819 20:13:05.370457  487755 logs.go:276] 0 containers: []
	W0819 20:13:05.370469  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:05.370478  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:05.370550  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:05.403747  487755 cri.go:89] found id: ""
	I0819 20:13:05.403773  487755 logs.go:276] 0 containers: []
	W0819 20:13:05.403781  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:05.403788  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:05.403854  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:05.435977  487755 cri.go:89] found id: ""
	I0819 20:13:05.436006  487755 logs.go:276] 0 containers: []
	W0819 20:13:05.436017  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:05.436025  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:05.436094  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:05.469756  487755 cri.go:89] found id: ""
	I0819 20:13:05.469789  487755 logs.go:276] 0 containers: []
	W0819 20:13:05.469805  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:05.469811  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:05.469868  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:05.502248  487755 cri.go:89] found id: ""
	I0819 20:13:05.502277  487755 logs.go:276] 0 containers: []
	W0819 20:13:05.502285  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:05.502290  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:05.502343  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:05.535867  487755 cri.go:89] found id: ""
	I0819 20:13:05.535899  487755 logs.go:276] 0 containers: []
	W0819 20:13:05.535909  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:05.535914  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:05.535968  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:05.569727  487755 cri.go:89] found id: ""
	I0819 20:13:05.569755  487755 logs.go:276] 0 containers: []
	W0819 20:13:05.569825  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:05.569841  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:05.569860  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:05.645905  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:05.645926  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:05.645943  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:05.725049  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:05.725094  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:05.761464  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:05.761506  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:05.815167  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:05.815212  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:07.291649  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:09.791735  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:05.229204  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:07.729578  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:08.329276  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:08.342178  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:08.342250  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:08.382622  487755 cri.go:89] found id: ""
	I0819 20:13:08.382660  487755 logs.go:276] 0 containers: []
	W0819 20:13:08.382673  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:08.382682  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:08.382753  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:08.416246  487755 cri.go:89] found id: ""
	I0819 20:13:08.416279  487755 logs.go:276] 0 containers: []
	W0819 20:13:08.416288  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:08.416294  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:08.416365  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:08.449329  487755 cri.go:89] found id: ""
	I0819 20:13:08.449360  487755 logs.go:276] 0 containers: []
	W0819 20:13:08.449372  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:08.449381  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:08.449442  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:08.482307  487755 cri.go:89] found id: ""
	I0819 20:13:08.482342  487755 logs.go:276] 0 containers: []
	W0819 20:13:08.482354  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:08.482363  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:08.482431  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:08.515326  487755 cri.go:89] found id: ""
	I0819 20:13:08.515357  487755 logs.go:276] 0 containers: []
	W0819 20:13:08.515366  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:08.515373  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:08.515428  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:08.548884  487755 cri.go:89] found id: ""
	I0819 20:13:08.548921  487755 logs.go:276] 0 containers: []
	W0819 20:13:08.548934  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:08.548943  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:08.549013  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:08.582060  487755 cri.go:89] found id: ""
	I0819 20:13:08.582097  487755 logs.go:276] 0 containers: []
	W0819 20:13:08.582116  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:08.582124  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:08.582194  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:08.614991  487755 cri.go:89] found id: ""
	I0819 20:13:08.615029  487755 logs.go:276] 0 containers: []
	W0819 20:13:08.615041  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:08.615053  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:08.615069  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:08.665929  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:08.665970  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:08.679935  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:08.679967  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:08.750924  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:08.750950  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:08.750967  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:08.828718  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:08.828760  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:11.372462  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:11.385825  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:11.385904  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:11.419548  487755 cri.go:89] found id: ""
	I0819 20:13:11.419582  487755 logs.go:276] 0 containers: []
	W0819 20:13:11.419601  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:11.419610  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:11.419691  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:11.456625  487755 cri.go:89] found id: ""
	I0819 20:13:11.456662  487755 logs.go:276] 0 containers: []
	W0819 20:13:11.456673  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:11.456682  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:11.456739  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:11.490795  487755 cri.go:89] found id: ""
	I0819 20:13:11.490830  487755 logs.go:276] 0 containers: []
	W0819 20:13:11.490842  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:11.490850  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:11.490931  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:11.525043  487755 cri.go:89] found id: ""
	I0819 20:13:11.525075  487755 logs.go:276] 0 containers: []
	W0819 20:13:11.525087  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:11.525096  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:11.525173  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:11.559234  487755 cri.go:89] found id: ""
	I0819 20:13:11.559260  487755 logs.go:276] 0 containers: []
	W0819 20:13:11.559268  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:11.559275  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:11.559336  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:11.591985  487755 cri.go:89] found id: ""
	I0819 20:13:11.592013  487755 logs.go:276] 0 containers: []
	W0819 20:13:11.592022  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:11.592029  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:11.592085  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:11.626254  487755 cri.go:89] found id: ""
	I0819 20:13:11.626282  487755 logs.go:276] 0 containers: []
	W0819 20:13:11.626291  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:11.626297  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:11.626350  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:11.659222  487755 cri.go:89] found id: ""
	I0819 20:13:11.659256  487755 logs.go:276] 0 containers: []
	W0819 20:13:11.659279  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:11.659292  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:11.659305  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:11.672554  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:11.672590  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:11.746173  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:11.746198  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:11.746217  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:11.830235  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:11.830279  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:11.874872  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:11.874913  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:11.791909  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:13.797543  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:10.229018  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:12.229153  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:14.229364  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:14.426724  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:14.440038  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:14.440130  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:14.471916  487755 cri.go:89] found id: ""
	I0819 20:13:14.471948  487755 logs.go:276] 0 containers: []
	W0819 20:13:14.471959  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:14.471967  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:14.472038  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:14.504857  487755 cri.go:89] found id: ""
	I0819 20:13:14.504901  487755 logs.go:276] 0 containers: []
	W0819 20:13:14.504929  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:14.504942  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:14.505095  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:14.537213  487755 cri.go:89] found id: ""
	I0819 20:13:14.537241  487755 logs.go:276] 0 containers: []
	W0819 20:13:14.537252  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:14.537267  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:14.537340  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:14.569274  487755 cri.go:89] found id: ""
	I0819 20:13:14.569305  487755 logs.go:276] 0 containers: []
	W0819 20:13:14.569315  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:14.569323  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:14.569390  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:14.603713  487755 cri.go:89] found id: ""
	I0819 20:13:14.603749  487755 logs.go:276] 0 containers: []
	W0819 20:13:14.603760  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:14.603768  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:14.603838  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:14.636013  487755 cri.go:89] found id: ""
	I0819 20:13:14.636045  487755 logs.go:276] 0 containers: []
	W0819 20:13:14.636054  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:14.636060  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:14.636117  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:14.672283  487755 cri.go:89] found id: ""
	I0819 20:13:14.672322  487755 logs.go:276] 0 containers: []
	W0819 20:13:14.672334  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:14.672342  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:14.672402  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:14.704195  487755 cri.go:89] found id: ""
	I0819 20:13:14.704221  487755 logs.go:276] 0 containers: []
	W0819 20:13:14.704231  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:14.704244  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:14.704261  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:14.756906  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:14.756951  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:14.769980  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:14.770018  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:14.837568  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:14.837599  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:14.837620  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:14.919781  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:14.919829  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
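
	Context for the probe loop above: on each pass the runner asks the CRI runtime whether any control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, ...) exists, and every query returns an empty ID list, so only kubelet/dmesg/CRI-O logs can be gathered. A minimal sketch of that same check, written here as a standalone Go program rather than minikube's own logs.go code, and assuming crictl is installed and runnable via sudo on the node:

	// Sketch only: mirrors the "sudo crictl ps -a --quiet --name=<component>" probes in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists the IDs of all containers whose name matches, in any state.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				// This is the case the log keeps reporting for every component.
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			fmt.Printf("%q containers: %v\n", c, ids)
		}
	}
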
	I0819 20:13:17.457826  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:17.470719  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:17.470786  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:17.505050  487755 cri.go:89] found id: ""
	I0819 20:13:17.505077  487755 logs.go:276] 0 containers: []
	W0819 20:13:17.505087  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:17.505097  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:17.505178  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:17.540882  487755 cri.go:89] found id: ""
	I0819 20:13:17.540923  487755 logs.go:276] 0 containers: []
	W0819 20:13:17.540943  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:17.540951  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:17.541023  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:17.576894  487755 cri.go:89] found id: ""
	I0819 20:13:17.576929  487755 logs.go:276] 0 containers: []
	W0819 20:13:17.576940  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:17.576949  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:17.577013  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:17.613491  487755 cri.go:89] found id: ""
	I0819 20:13:17.613524  487755 logs.go:276] 0 containers: []
	W0819 20:13:17.613536  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:17.613544  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:17.613622  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:17.651555  487755 cri.go:89] found id: ""
	I0819 20:13:17.651588  487755 logs.go:276] 0 containers: []
	W0819 20:13:17.651601  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:17.651608  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:17.651683  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:17.689467  487755 cri.go:89] found id: ""
	I0819 20:13:17.689504  487755 logs.go:276] 0 containers: []
	W0819 20:13:17.689516  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:17.689525  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:17.689616  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:17.735421  487755 cri.go:89] found id: ""
	I0819 20:13:17.735461  487755 logs.go:276] 0 containers: []
	W0819 20:13:17.735472  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:17.735480  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:17.735551  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:17.771627  487755 cri.go:89] found id: ""
	I0819 20:13:17.771658  487755 logs.go:276] 0 containers: []
	W0819 20:13:17.771670  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:17.771684  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:17.771700  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:17.830422  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:17.830466  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:17.846549  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:17.846591  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:17.920430  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:17.920457  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:17.920477  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:16.291036  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:18.291872  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:16.728776  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:19.228428  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:18.007106  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:18.007157  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:20.552555  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:20.565032  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:20.565121  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:20.598329  487755 cri.go:89] found id: ""
	I0819 20:13:20.598356  487755 logs.go:276] 0 containers: []
	W0819 20:13:20.598366  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:20.598373  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:20.598437  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:20.632496  487755 cri.go:89] found id: ""
	I0819 20:13:20.632523  487755 logs.go:276] 0 containers: []
	W0819 20:13:20.632531  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:20.632537  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:20.632596  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:20.665813  487755 cri.go:89] found id: ""
	I0819 20:13:20.665846  487755 logs.go:276] 0 containers: []
	W0819 20:13:20.665855  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:20.665861  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:20.665919  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:20.699725  487755 cri.go:89] found id: ""
	I0819 20:13:20.699750  487755 logs.go:276] 0 containers: []
	W0819 20:13:20.699759  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:20.699765  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:20.699821  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:20.738505  487755 cri.go:89] found id: ""
	I0819 20:13:20.738540  487755 logs.go:276] 0 containers: []
	W0819 20:13:20.738549  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:20.738555  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:20.738611  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:20.772181  487755 cri.go:89] found id: ""
	I0819 20:13:20.772214  487755 logs.go:276] 0 containers: []
	W0819 20:13:20.772225  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:20.772233  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:20.772292  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:20.807882  487755 cri.go:89] found id: ""
	I0819 20:13:20.807915  487755 logs.go:276] 0 containers: []
	W0819 20:13:20.807927  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:20.807936  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:20.808000  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:20.843649  487755 cri.go:89] found id: ""
	I0819 20:13:20.843680  487755 logs.go:276] 0 containers: []
	W0819 20:13:20.843691  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:20.843701  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:20.843714  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:20.923859  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:20.923915  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:20.966033  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:20.966067  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:21.016425  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:21.016474  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:21.030622  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:21.030656  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:21.107344  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:20.791430  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:22.791989  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:21.228677  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:23.728854  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:23.607673  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:23.620292  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:23.620369  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:23.655145  487755 cri.go:89] found id: ""
	I0819 20:13:23.655189  487755 logs.go:276] 0 containers: []
	W0819 20:13:23.655204  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:23.655213  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:23.655273  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:23.689644  487755 cri.go:89] found id: ""
	I0819 20:13:23.689679  487755 logs.go:276] 0 containers: []
	W0819 20:13:23.689690  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:23.689696  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:23.689764  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:23.723154  487755 cri.go:89] found id: ""
	I0819 20:13:23.723196  487755 logs.go:276] 0 containers: []
	W0819 20:13:23.723208  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:23.723215  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:23.723281  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:23.759178  487755 cri.go:89] found id: ""
	I0819 20:13:23.759206  487755 logs.go:276] 0 containers: []
	W0819 20:13:23.759214  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:23.759220  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:23.759288  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:23.795990  487755 cri.go:89] found id: ""
	I0819 20:13:23.796025  487755 logs.go:276] 0 containers: []
	W0819 20:13:23.796035  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:23.796043  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:23.796118  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:23.829795  487755 cri.go:89] found id: ""
	I0819 20:13:23.829834  487755 logs.go:276] 0 containers: []
	W0819 20:13:23.829847  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:23.829856  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:23.829926  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:23.864420  487755 cri.go:89] found id: ""
	I0819 20:13:23.864452  487755 logs.go:276] 0 containers: []
	W0819 20:13:23.864460  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:23.864466  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:23.864545  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:23.898946  487755 cri.go:89] found id: ""
	I0819 20:13:23.898978  487755 logs.go:276] 0 containers: []
	W0819 20:13:23.898989  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:23.899002  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:23.899018  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:23.935952  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:23.935989  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:23.987499  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:23.987547  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:24.000762  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:24.000796  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:24.071349  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:24.071378  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:24.071393  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
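
	The recurring "connection to the server localhost:8443 was refused" lines are consistent with the empty crictl listings: kubectl has nothing to talk to because no kube-apiserver container is running. A minimal sketch of a reachability check for that endpoint, assuming localhost:8443 (taken from the refusal message above) is the intended apiserver address:

	// Sketch only: a plain TCP dial against the apiserver port the failing
	// "kubectl describe nodes" calls depend on.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Connection refused here matches the symptom in the log:
			// nothing is bound to the apiserver port yet.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
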
	I0819 20:13:26.650001  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:26.663095  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:26.663174  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:26.703669  487755 cri.go:89] found id: ""
	I0819 20:13:26.703703  487755 logs.go:276] 0 containers: []
	W0819 20:13:26.703715  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:26.703721  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:26.703776  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:26.738214  487755 cri.go:89] found id: ""
	I0819 20:13:26.738248  487755 logs.go:276] 0 containers: []
	W0819 20:13:26.738259  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:26.738264  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:26.738318  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:26.772931  487755 cri.go:89] found id: ""
	I0819 20:13:26.772966  487755 logs.go:276] 0 containers: []
	W0819 20:13:26.772977  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:26.772985  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:26.773062  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:26.807090  487755 cri.go:89] found id: ""
	I0819 20:13:26.807124  487755 logs.go:276] 0 containers: []
	W0819 20:13:26.807135  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:26.807143  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:26.807213  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:26.839971  487755 cri.go:89] found id: ""
	I0819 20:13:26.839999  487755 logs.go:276] 0 containers: []
	W0819 20:13:26.840009  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:26.840015  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:26.840078  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:26.879459  487755 cri.go:89] found id: ""
	I0819 20:13:26.879486  487755 logs.go:276] 0 containers: []
	W0819 20:13:26.879494  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:26.879511  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:26.879569  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:26.912226  487755 cri.go:89] found id: ""
	I0819 20:13:26.912260  487755 logs.go:276] 0 containers: []
	W0819 20:13:26.912271  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:26.912277  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:26.912334  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:26.945112  487755 cri.go:89] found id: ""
	I0819 20:13:26.945159  487755 logs.go:276] 0 containers: []
	W0819 20:13:26.945169  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:26.945179  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:26.945191  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:26.993448  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:26.993545  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:27.006790  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:27.006823  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:27.076066  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:27.076088  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:27.076100  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:27.157939  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:27.158049  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:25.291750  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:27.790904  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:29.792695  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:26.228416  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:28.229309  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:29.698569  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:29.713216  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:29.713288  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:29.751323  487755 cri.go:89] found id: ""
	I0819 20:13:29.751354  487755 logs.go:276] 0 containers: []
	W0819 20:13:29.751363  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:29.751369  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:29.751424  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:29.786486  487755 cri.go:89] found id: ""
	I0819 20:13:29.786521  487755 logs.go:276] 0 containers: []
	W0819 20:13:29.786530  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:29.786535  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:29.786612  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:29.823419  487755 cri.go:89] found id: ""
	I0819 20:13:29.823447  487755 logs.go:276] 0 containers: []
	W0819 20:13:29.823458  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:29.823471  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:29.823543  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:29.858147  487755 cri.go:89] found id: ""
	I0819 20:13:29.858177  487755 logs.go:276] 0 containers: []
	W0819 20:13:29.858190  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:29.858198  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:29.858264  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:29.895938  487755 cri.go:89] found id: ""
	I0819 20:13:29.895976  487755 logs.go:276] 0 containers: []
	W0819 20:13:29.895988  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:29.895996  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:29.896071  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:29.946778  487755 cri.go:89] found id: ""
	I0819 20:13:29.946809  487755 logs.go:276] 0 containers: []
	W0819 20:13:29.946821  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:29.946830  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:29.946888  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:29.980487  487755 cri.go:89] found id: ""
	I0819 20:13:29.980536  487755 logs.go:276] 0 containers: []
	W0819 20:13:29.980546  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:29.980554  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:29.980629  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:30.014986  487755 cri.go:89] found id: ""
	I0819 20:13:30.015015  487755 logs.go:276] 0 containers: []
	W0819 20:13:30.015023  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:30.015034  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:30.015047  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:30.068273  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:30.068323  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:30.082861  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:30.082899  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:30.153489  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:30.153517  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:30.153533  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:30.237013  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:30.237075  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:32.784986  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:32.800888  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:32.800958  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:32.838697  487755 cri.go:89] found id: ""
	I0819 20:13:32.838726  487755 logs.go:276] 0 containers: []
	W0819 20:13:32.838736  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:32.838743  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:32.838814  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:32.873486  487755 cri.go:89] found id: ""
	I0819 20:13:32.873515  487755 logs.go:276] 0 containers: []
	W0819 20:13:32.873526  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:32.873534  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:32.873610  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:32.907012  487755 cri.go:89] found id: ""
	I0819 20:13:32.907047  487755 logs.go:276] 0 containers: []
	W0819 20:13:32.907062  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:32.907068  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:32.907138  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:32.942652  487755 cri.go:89] found id: ""
	I0819 20:13:32.942679  487755 logs.go:276] 0 containers: []
	W0819 20:13:32.942688  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:32.942694  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:32.942753  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:32.291481  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:34.791084  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:30.728425  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:32.728730  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:34.728984  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:32.976958  487755 cri.go:89] found id: ""
	I0819 20:13:32.976986  487755 logs.go:276] 0 containers: []
	W0819 20:13:32.976996  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:32.977002  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:32.977065  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:33.010325  487755 cri.go:89] found id: ""
	I0819 20:13:33.010359  487755 logs.go:276] 0 containers: []
	W0819 20:13:33.010372  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:33.010380  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:33.010447  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:33.043606  487755 cri.go:89] found id: ""
	I0819 20:13:33.043636  487755 logs.go:276] 0 containers: []
	W0819 20:13:33.043647  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:33.043654  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:33.043740  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:33.076723  487755 cri.go:89] found id: ""
	I0819 20:13:33.076750  487755 logs.go:276] 0 containers: []
	W0819 20:13:33.076759  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:33.076769  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:33.076783  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:33.129971  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:33.130018  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:33.143598  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:33.143636  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:33.210649  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:33.210675  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:33.210689  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:33.306645  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:33.306694  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:35.845670  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:35.860194  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:35.860266  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:35.895069  487755 cri.go:89] found id: ""
	I0819 20:13:35.895101  487755 logs.go:276] 0 containers: []
	W0819 20:13:35.895114  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:35.895122  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:35.895199  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:35.930171  487755 cri.go:89] found id: ""
	I0819 20:13:35.930199  487755 logs.go:276] 0 containers: []
	W0819 20:13:35.930207  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:35.930213  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:35.930279  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:35.963111  487755 cri.go:89] found id: ""
	I0819 20:13:35.963144  487755 logs.go:276] 0 containers: []
	W0819 20:13:35.963153  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:35.963159  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:35.963218  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:35.998024  487755 cri.go:89] found id: ""
	I0819 20:13:35.998058  487755 logs.go:276] 0 containers: []
	W0819 20:13:35.998071  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:35.998080  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:35.998150  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:36.033343  487755 cri.go:89] found id: ""
	I0819 20:13:36.033381  487755 logs.go:276] 0 containers: []
	W0819 20:13:36.033392  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:36.033401  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:36.033474  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:36.073621  487755 cri.go:89] found id: ""
	I0819 20:13:36.073650  487755 logs.go:276] 0 containers: []
	W0819 20:13:36.073662  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:36.073671  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:36.073735  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:36.111451  487755 cri.go:89] found id: ""
	I0819 20:13:36.111480  487755 logs.go:276] 0 containers: []
	W0819 20:13:36.111492  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:36.111507  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:36.111570  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:36.148619  487755 cri.go:89] found id: ""
	I0819 20:13:36.148654  487755 logs.go:276] 0 containers: []
	W0819 20:13:36.148667  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:36.148680  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:36.148697  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:36.185390  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:36.185429  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:36.236744  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:36.236791  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:36.251594  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:36.251628  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:36.324665  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:36.324687  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:36.324704  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:37.291071  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:39.293018  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:36.730129  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:39.230091  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:38.908523  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:38.922035  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:38.922105  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:38.956334  487755 cri.go:89] found id: ""
	I0819 20:13:38.956373  487755 logs.go:276] 0 containers: []
	W0819 20:13:38.956385  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:38.956394  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:38.956448  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:38.989774  487755 cri.go:89] found id: ""
	I0819 20:13:38.989817  487755 logs.go:276] 0 containers: []
	W0819 20:13:38.989826  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:38.989833  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:38.989887  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:39.027347  487755 cri.go:89] found id: ""
	I0819 20:13:39.027375  487755 logs.go:276] 0 containers: []
	W0819 20:13:39.027386  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:39.027394  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:39.027462  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:39.065193  487755 cri.go:89] found id: ""
	I0819 20:13:39.065226  487755 logs.go:276] 0 containers: []
	W0819 20:13:39.065236  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:39.065243  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:39.065297  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:39.099399  487755 cri.go:89] found id: ""
	I0819 20:13:39.099429  487755 logs.go:276] 0 containers: []
	W0819 20:13:39.099441  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:39.099449  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:39.099518  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:39.132151  487755 cri.go:89] found id: ""
	I0819 20:13:39.132179  487755 logs.go:276] 0 containers: []
	W0819 20:13:39.132189  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:39.132197  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:39.132257  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:39.165760  487755 cri.go:89] found id: ""
	I0819 20:13:39.165793  487755 logs.go:276] 0 containers: []
	W0819 20:13:39.165806  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:39.165812  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:39.165872  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:39.198885  487755 cri.go:89] found id: ""
	I0819 20:13:39.198919  487755 logs.go:276] 0 containers: []
	W0819 20:13:39.198931  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:39.198956  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:39.198984  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:39.271834  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:39.271858  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:39.271877  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:39.352576  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:39.352620  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:39.393101  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:39.393159  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:39.442462  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:39.442506  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:41.956582  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:41.970773  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:41.970842  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:42.005267  487755 cri.go:89] found id: ""
	I0819 20:13:42.005297  487755 logs.go:276] 0 containers: []
	W0819 20:13:42.005305  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:42.005311  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:42.005375  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:42.038994  487755 cri.go:89] found id: ""
	I0819 20:13:42.039027  487755 logs.go:276] 0 containers: []
	W0819 20:13:42.039035  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:42.039042  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:42.039116  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:42.082837  487755 cri.go:89] found id: ""
	I0819 20:13:42.082870  487755 logs.go:276] 0 containers: []
	W0819 20:13:42.082881  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:42.082889  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:42.082959  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:42.143920  487755 cri.go:89] found id: ""
	I0819 20:13:42.143955  487755 logs.go:276] 0 containers: []
	W0819 20:13:42.143967  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:42.143975  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:42.144058  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:42.187542  487755 cri.go:89] found id: ""
	I0819 20:13:42.187578  487755 logs.go:276] 0 containers: []
	W0819 20:13:42.187591  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:42.187599  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:42.187679  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:42.223076  487755 cri.go:89] found id: ""
	I0819 20:13:42.223109  487755 logs.go:276] 0 containers: []
	W0819 20:13:42.223118  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:42.223125  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:42.223185  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:42.262344  487755 cri.go:89] found id: ""
	I0819 20:13:42.262383  487755 logs.go:276] 0 containers: []
	W0819 20:13:42.262395  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:42.262407  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:42.262477  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:42.300642  487755 cri.go:89] found id: ""
	I0819 20:13:42.300670  487755 logs.go:276] 0 containers: []
	W0819 20:13:42.300680  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:42.300690  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:42.300774  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:42.356105  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:42.356149  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:42.369992  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:42.370033  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:42.435904  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:42.435927  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:42.435944  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:42.514025  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:42.514067  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:41.791041  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:43.792034  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:41.728891  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:43.728959  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
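
	The interleaved pod_ready lines come from two other test processes (486861 and 487175) that keep polling the Ready condition of their metrics-server pods and logging "False". A minimal sketch of that kind of readiness poll, assuming kubectl and a working kubeconfig are available (this is an illustration, not minikube's pod_ready.go):

	// Sketch only: poll a pod's Ready condition via kubectl jsonpath.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady reports whether the pod's Ready condition is "True".
	func podReady(ns, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		for i := 0; i < 5; i++ {
			ready, err := podReady("kube-system", "metrics-server-6867b74b74-pwvmg")
			if err != nil {
				fmt.Println("poll failed:", err)
			} else if ready {
				fmt.Println("pod is Ready")
				return
			} else {
				fmt.Println("pod not Ready yet")
			}
			time.Sleep(2 * time.Second)
		}
	}
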
	I0819 20:13:45.054057  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:45.066721  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:45.066824  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:45.101505  487755 cri.go:89] found id: ""
	I0819 20:13:45.101535  487755 logs.go:276] 0 containers: []
	W0819 20:13:45.101544  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:45.101551  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:45.101606  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:45.136199  487755 cri.go:89] found id: ""
	I0819 20:13:45.136231  487755 logs.go:276] 0 containers: []
	W0819 20:13:45.136251  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:45.136258  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:45.136327  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:45.170343  487755 cri.go:89] found id: ""
	I0819 20:13:45.170371  487755 logs.go:276] 0 containers: []
	W0819 20:13:45.170381  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:45.170389  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:45.170462  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:45.205148  487755 cri.go:89] found id: ""
	I0819 20:13:45.205181  487755 logs.go:276] 0 containers: []
	W0819 20:13:45.205192  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:45.205208  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:45.205306  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:45.239775  487755 cri.go:89] found id: ""
	I0819 20:13:45.239803  487755 logs.go:276] 0 containers: []
	W0819 20:13:45.239815  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:45.239823  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:45.239887  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:45.275285  487755 cri.go:89] found id: ""
	I0819 20:13:45.275311  487755 logs.go:276] 0 containers: []
	W0819 20:13:45.275319  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:45.275334  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:45.275401  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:45.310612  487755 cri.go:89] found id: ""
	I0819 20:13:45.310645  487755 logs.go:276] 0 containers: []
	W0819 20:13:45.310654  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:45.310660  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:45.310725  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:45.345905  487755 cri.go:89] found id: ""
	I0819 20:13:45.345935  487755 logs.go:276] 0 containers: []
	W0819 20:13:45.345947  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:45.345961  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:45.345979  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:45.359626  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:45.359656  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:45.432068  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:45.432098  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:45.432125  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:45.508829  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:45.508883  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:45.549646  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:45.549679  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
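
	[editor's note] The cycle above is minikube's log collector polling the node over SSH: for each expected control-plane component it runs "sudo crictl ps -a --quiet --name=<component>", finds no containers, and then falls back to gathering kubelet, dmesg, CRI-O and container-status output; the "describe nodes" step fails with "connection to the server localhost:8443 was refused" for the same reason, since no kube-apiserver container ever comes up. A manual equivalent of that check, using only the commands already shown in this log and run directly on the node (a sketch, not minikube's own code), would be:

	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	    echo "== ${c} =="
	    # Empty output means CRI-O has never created a container for this component
	    sudo crictl ps -a --quiet --name="${c}"
	  done
	  # With no control-plane containers, the kubelet and CRI-O journals are the next stop:
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
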
	I0819 20:13:46.295568  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:48.791511  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:46.229140  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:48.230348  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
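
	[editor's note] The interleaved pod_ready lines come from two other test processes (pids 486861 and 487175) polling their metrics-server pods for the Ready condition. A rough stand-alone equivalent of that check (generic kubectl, not minikube's internal client; the pod name is copied from the log) is:

	  kubectl -n kube-system get pod metrics-server-6867b74b74-9shzw \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
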
	I0819 20:13:48.098759  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:48.111837  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:48.111911  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:48.144843  487755 cri.go:89] found id: ""
	I0819 20:13:48.144877  487755 logs.go:276] 0 containers: []
	W0819 20:13:48.144887  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:48.144893  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:48.144954  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:48.177399  487755 cri.go:89] found id: ""
	I0819 20:13:48.177434  487755 logs.go:276] 0 containers: []
	W0819 20:13:48.177444  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:48.177450  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:48.177506  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:48.215177  487755 cri.go:89] found id: ""
	I0819 20:13:48.215207  487755 logs.go:276] 0 containers: []
	W0819 20:13:48.215216  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:48.215225  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:48.215278  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:48.248863  487755 cri.go:89] found id: ""
	I0819 20:13:48.248897  487755 logs.go:276] 0 containers: []
	W0819 20:13:48.248905  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:48.248913  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:48.248970  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:48.282427  487755 cri.go:89] found id: ""
	I0819 20:13:48.282470  487755 logs.go:276] 0 containers: []
	W0819 20:13:48.282481  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:48.282488  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:48.282563  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:48.319767  487755 cri.go:89] found id: ""
	I0819 20:13:48.319796  487755 logs.go:276] 0 containers: []
	W0819 20:13:48.319806  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:48.319811  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:48.319880  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:48.353868  487755 cri.go:89] found id: ""
	I0819 20:13:48.353905  487755 logs.go:276] 0 containers: []
	W0819 20:13:48.353914  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:48.353921  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:48.353987  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:48.388144  487755 cri.go:89] found id: ""
	I0819 20:13:48.388173  487755 logs.go:276] 0 containers: []
	W0819 20:13:48.388182  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:48.388193  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:48.388209  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:48.439596  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:48.439642  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:48.454858  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:48.454888  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:48.527711  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:48.527737  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:48.527753  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:48.604825  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:48.604884  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:51.143023  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:51.155875  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:51.155962  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:51.189244  487755 cri.go:89] found id: ""
	I0819 20:13:51.189274  487755 logs.go:276] 0 containers: []
	W0819 20:13:51.189286  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:51.189295  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:51.189357  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:51.222347  487755 cri.go:89] found id: ""
	I0819 20:13:51.222374  487755 logs.go:276] 0 containers: []
	W0819 20:13:51.222384  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:51.222393  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:51.222458  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:51.257011  487755 cri.go:89] found id: ""
	I0819 20:13:51.257040  487755 logs.go:276] 0 containers: []
	W0819 20:13:51.257049  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:51.257056  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:51.257120  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:51.294005  487755 cri.go:89] found id: ""
	I0819 20:13:51.294033  487755 logs.go:276] 0 containers: []
	W0819 20:13:51.294042  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:51.294048  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:51.294100  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:51.333677  487755 cri.go:89] found id: ""
	I0819 20:13:51.333707  487755 logs.go:276] 0 containers: []
	W0819 20:13:51.333718  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:51.333727  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:51.333795  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:51.366971  487755 cri.go:89] found id: ""
	I0819 20:13:51.367003  487755 logs.go:276] 0 containers: []
	W0819 20:13:51.367013  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:51.367020  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:51.367076  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:51.400660  487755 cri.go:89] found id: ""
	I0819 20:13:51.400692  487755 logs.go:276] 0 containers: []
	W0819 20:13:51.400700  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:51.400707  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:51.400775  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:51.434102  487755 cri.go:89] found id: ""
	I0819 20:13:51.434132  487755 logs.go:276] 0 containers: []
	W0819 20:13:51.434143  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:51.434156  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:51.434175  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:51.506254  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:51.506282  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:51.506299  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:51.581589  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:51.581634  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:51.623055  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:51.623087  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:51.675069  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:51.675112  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:50.792633  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:53.292086  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:50.727920  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:52.728924  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:54.729716  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:54.188985  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:54.203170  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:54.203242  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:54.241113  487755 cri.go:89] found id: ""
	I0819 20:13:54.241164  487755 logs.go:276] 0 containers: []
	W0819 20:13:54.241176  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:54.241182  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:54.241237  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:54.275353  487755 cri.go:89] found id: ""
	I0819 20:13:54.275381  487755 logs.go:276] 0 containers: []
	W0819 20:13:54.275392  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:54.275399  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:54.275457  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:54.310174  487755 cri.go:89] found id: ""
	I0819 20:13:54.310206  487755 logs.go:276] 0 containers: []
	W0819 20:13:54.310218  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:54.310226  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:54.310295  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:54.347352  487755 cri.go:89] found id: ""
	I0819 20:13:54.347386  487755 logs.go:276] 0 containers: []
	W0819 20:13:54.347398  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:54.347407  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:54.347475  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:54.380830  487755 cri.go:89] found id: ""
	I0819 20:13:54.380862  487755 logs.go:276] 0 containers: []
	W0819 20:13:54.380871  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:54.380877  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:54.380933  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:54.413279  487755 cri.go:89] found id: ""
	I0819 20:13:54.413306  487755 logs.go:276] 0 containers: []
	W0819 20:13:54.413315  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:54.413322  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:54.413400  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:54.451181  487755 cri.go:89] found id: ""
	I0819 20:13:54.451209  487755 logs.go:276] 0 containers: []
	W0819 20:13:54.451220  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:54.451225  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:54.451285  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:54.482895  487755 cri.go:89] found id: ""
	I0819 20:13:54.482921  487755 logs.go:276] 0 containers: []
	W0819 20:13:54.482930  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:54.482941  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:54.482954  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:54.532566  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:54.532609  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:54.546183  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:54.546213  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:54.612155  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:54.612185  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:54.612203  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:54.687108  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:54.687151  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:57.227566  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:13:57.241606  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:13:57.241685  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:13:57.277741  487755 cri.go:89] found id: ""
	I0819 20:13:57.277779  487755 logs.go:276] 0 containers: []
	W0819 20:13:57.277792  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:13:57.277800  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:13:57.277869  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:13:57.313327  487755 cri.go:89] found id: ""
	I0819 20:13:57.313365  487755 logs.go:276] 0 containers: []
	W0819 20:13:57.313377  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:13:57.313385  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:13:57.313456  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:13:57.347893  487755 cri.go:89] found id: ""
	I0819 20:13:57.347927  487755 logs.go:276] 0 containers: []
	W0819 20:13:57.347935  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:13:57.347941  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:13:57.347994  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:13:57.380891  487755 cri.go:89] found id: ""
	I0819 20:13:57.380920  487755 logs.go:276] 0 containers: []
	W0819 20:13:57.380931  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:13:57.380939  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:13:57.381010  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:13:57.413959  487755 cri.go:89] found id: ""
	I0819 20:13:57.413986  487755 logs.go:276] 0 containers: []
	W0819 20:13:57.413994  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:13:57.414014  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:13:57.414076  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:13:57.448973  487755 cri.go:89] found id: ""
	I0819 20:13:57.449011  487755 logs.go:276] 0 containers: []
	W0819 20:13:57.449019  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:13:57.449026  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:13:57.449080  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:13:57.483005  487755 cri.go:89] found id: ""
	I0819 20:13:57.483041  487755 logs.go:276] 0 containers: []
	W0819 20:13:57.483054  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:13:57.483061  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:13:57.483138  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:13:57.517509  487755 cri.go:89] found id: ""
	I0819 20:13:57.517541  487755 logs.go:276] 0 containers: []
	W0819 20:13:57.517553  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:13:57.517565  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:13:57.517581  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:13:57.568219  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:13:57.568260  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:13:57.581707  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:13:57.581737  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:13:57.656858  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:13:57.656885  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:13:57.656899  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:13:57.736296  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:13:57.736334  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:13:55.791639  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:57.792555  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:57.228728  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:13:59.229946  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:00.280944  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:00.294877  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:00.294942  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:00.331611  487755 cri.go:89] found id: ""
	I0819 20:14:00.331639  487755 logs.go:276] 0 containers: []
	W0819 20:14:00.331648  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:00.331655  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:00.331715  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:00.365413  487755 cri.go:89] found id: ""
	I0819 20:14:00.365440  487755 logs.go:276] 0 containers: []
	W0819 20:14:00.365447  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:00.365453  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:00.365512  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:00.398634  487755 cri.go:89] found id: ""
	I0819 20:14:00.398669  487755 logs.go:276] 0 containers: []
	W0819 20:14:00.398681  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:00.398689  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:00.398766  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:00.431725  487755 cri.go:89] found id: ""
	I0819 20:14:00.431750  487755 logs.go:276] 0 containers: []
	W0819 20:14:00.431759  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:00.431765  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:00.431816  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:00.466527  487755 cri.go:89] found id: ""
	I0819 20:14:00.466567  487755 logs.go:276] 0 containers: []
	W0819 20:14:00.466589  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:00.466597  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:00.466672  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:00.500403  487755 cri.go:89] found id: ""
	I0819 20:14:00.500434  487755 logs.go:276] 0 containers: []
	W0819 20:14:00.500446  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:00.500455  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:00.500528  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:00.533442  487755 cri.go:89] found id: ""
	I0819 20:14:00.533469  487755 logs.go:276] 0 containers: []
	W0819 20:14:00.533485  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:00.533491  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:00.533564  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:00.566973  487755 cri.go:89] found id: ""
	I0819 20:14:00.566999  487755 logs.go:276] 0 containers: []
	W0819 20:14:00.567008  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:00.567017  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:00.567031  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:00.617159  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:00.617204  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:00.630376  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:00.630408  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:00.698720  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:00.698766  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:00.698784  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:00.784254  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:00.784311  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:00.292085  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:02.292402  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:04.791278  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:01.728497  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:04.229334  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:03.328470  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:03.341854  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:03.341937  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:03.376020  487755 cri.go:89] found id: ""
	I0819 20:14:03.376054  487755 logs.go:276] 0 containers: []
	W0819 20:14:03.376065  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:03.376072  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:03.376144  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:03.411301  487755 cri.go:89] found id: ""
	I0819 20:14:03.411331  487755 logs.go:276] 0 containers: []
	W0819 20:14:03.411341  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:03.411347  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:03.411400  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:03.445530  487755 cri.go:89] found id: ""
	I0819 20:14:03.445558  487755 logs.go:276] 0 containers: []
	W0819 20:14:03.445566  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:03.445572  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:03.445629  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:03.479566  487755 cri.go:89] found id: ""
	I0819 20:14:03.479608  487755 logs.go:276] 0 containers: []
	W0819 20:14:03.479617  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:03.479623  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:03.479684  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:03.512435  487755 cri.go:89] found id: ""
	I0819 20:14:03.512469  487755 logs.go:276] 0 containers: []
	W0819 20:14:03.512478  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:03.512484  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:03.512549  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:03.548779  487755 cri.go:89] found id: ""
	I0819 20:14:03.548809  487755 logs.go:276] 0 containers: []
	W0819 20:14:03.548817  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:03.548823  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:03.548876  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:03.581402  487755 cri.go:89] found id: ""
	I0819 20:14:03.581434  487755 logs.go:276] 0 containers: []
	W0819 20:14:03.581446  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:03.581453  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:03.581524  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:03.614074  487755 cri.go:89] found id: ""
	I0819 20:14:03.614110  487755 logs.go:276] 0 containers: []
	W0819 20:14:03.614118  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:03.614129  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:03.614143  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:03.662440  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:03.662482  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:03.675862  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:03.675891  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:03.751451  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:03.751473  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:03.751486  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:03.832610  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:03.832653  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:06.376230  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:06.388645  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:06.388728  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:06.419887  487755 cri.go:89] found id: ""
	I0819 20:14:06.419921  487755 logs.go:276] 0 containers: []
	W0819 20:14:06.419929  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:06.419935  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:06.419993  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:06.452714  487755 cri.go:89] found id: ""
	I0819 20:14:06.452748  487755 logs.go:276] 0 containers: []
	W0819 20:14:06.452769  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:06.452778  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:06.452852  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:06.491256  487755 cri.go:89] found id: ""
	I0819 20:14:06.491294  487755 logs.go:276] 0 containers: []
	W0819 20:14:06.491303  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:06.491310  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:06.491365  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:06.526694  487755 cri.go:89] found id: ""
	I0819 20:14:06.526722  487755 logs.go:276] 0 containers: []
	W0819 20:14:06.526730  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:06.526736  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:06.526794  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:06.560808  487755 cri.go:89] found id: ""
	I0819 20:14:06.560834  487755 logs.go:276] 0 containers: []
	W0819 20:14:06.560847  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:06.560853  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:06.560906  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:06.594508  487755 cri.go:89] found id: ""
	I0819 20:14:06.594540  487755 logs.go:276] 0 containers: []
	W0819 20:14:06.594552  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:06.594559  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:06.594641  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:06.626796  487755 cri.go:89] found id: ""
	I0819 20:14:06.626824  487755 logs.go:276] 0 containers: []
	W0819 20:14:06.626835  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:06.626842  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:06.626915  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:06.660560  487755 cri.go:89] found id: ""
	I0819 20:14:06.660602  487755 logs.go:276] 0 containers: []
	W0819 20:14:06.660614  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:06.660627  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:06.660643  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:06.724155  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:06.724182  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:06.724199  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:06.806139  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:06.806182  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:06.846364  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:06.846396  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:06.899396  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:06.899438  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:06.792518  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:09.291036  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:06.230006  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:08.729868  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:09.413487  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:09.428023  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:09.428089  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:09.466305  487755 cri.go:89] found id: ""
	I0819 20:14:09.466337  487755 logs.go:276] 0 containers: []
	W0819 20:14:09.466346  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:09.466352  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:09.466427  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:09.499182  487755 cri.go:89] found id: ""
	I0819 20:14:09.499211  487755 logs.go:276] 0 containers: []
	W0819 20:14:09.499219  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:09.499226  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:09.499279  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:09.536772  487755 cri.go:89] found id: ""
	I0819 20:14:09.536802  487755 logs.go:276] 0 containers: []
	W0819 20:14:09.536810  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:09.536817  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:09.536869  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:09.572905  487755 cri.go:89] found id: ""
	I0819 20:14:09.572944  487755 logs.go:276] 0 containers: []
	W0819 20:14:09.572954  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:09.572961  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:09.573019  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:09.605878  487755 cri.go:89] found id: ""
	I0819 20:14:09.605904  487755 logs.go:276] 0 containers: []
	W0819 20:14:09.605913  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:09.605919  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:09.605971  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:09.640960  487755 cri.go:89] found id: ""
	I0819 20:14:09.640989  487755 logs.go:276] 0 containers: []
	W0819 20:14:09.640997  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:09.641003  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:09.641063  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:09.676093  487755 cri.go:89] found id: ""
	I0819 20:14:09.676125  487755 logs.go:276] 0 containers: []
	W0819 20:14:09.676136  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:09.676145  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:09.676218  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:09.709428  487755 cri.go:89] found id: ""
	I0819 20:14:09.709458  487755 logs.go:276] 0 containers: []
	W0819 20:14:09.709469  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:09.709481  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:09.709497  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:09.763463  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:09.763502  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:09.778332  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:09.778379  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:09.849888  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:09.849913  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:09.849929  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:09.924330  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:09.924376  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:12.465788  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:12.478638  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:12.478703  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:12.511830  487755 cri.go:89] found id: ""
	I0819 20:14:12.511865  487755 logs.go:276] 0 containers: []
	W0819 20:14:12.511878  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:12.511884  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:12.511950  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:12.552499  487755 cri.go:89] found id: ""
	I0819 20:14:12.552541  487755 logs.go:276] 0 containers: []
	W0819 20:14:12.552551  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:12.552561  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:12.552636  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:12.587982  487755 cri.go:89] found id: ""
	I0819 20:14:12.588012  487755 logs.go:276] 0 containers: []
	W0819 20:14:12.588022  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:12.588029  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:12.588097  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:12.621999  487755 cri.go:89] found id: ""
	I0819 20:14:12.622033  487755 logs.go:276] 0 containers: []
	W0819 20:14:12.622045  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:12.622053  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:12.622124  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:12.657580  487755 cri.go:89] found id: ""
	I0819 20:14:12.657612  487755 logs.go:276] 0 containers: []
	W0819 20:14:12.657621  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:12.657627  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:12.657691  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:12.692295  487755 cri.go:89] found id: ""
	I0819 20:14:12.692330  487755 logs.go:276] 0 containers: []
	W0819 20:14:12.692342  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:12.692351  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:12.692424  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:12.724868  487755 cri.go:89] found id: ""
	I0819 20:14:12.724900  487755 logs.go:276] 0 containers: []
	W0819 20:14:12.724913  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:12.724920  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:12.724994  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:12.760647  487755 cri.go:89] found id: ""
	I0819 20:14:12.760674  487755 logs.go:276] 0 containers: []
	W0819 20:14:12.760685  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:12.760697  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:12.760715  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:12.811503  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:12.811548  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:12.825966  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:12.826002  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:12.904629  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:12.904657  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:12.904672  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:12.572829  486208 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000431396s
	I0819 20:14:12.572891  486208 kubeadm.go:310] 
	I0819 20:14:12.572946  486208 kubeadm.go:310] Unfortunately, an error has occurred:
	I0819 20:14:12.573423  486208 kubeadm.go:310] 	context deadline exceeded
	I0819 20:14:12.573454  486208 kubeadm.go:310] 
	I0819 20:14:12.573499  486208 kubeadm.go:310] This error is likely caused by:
	I0819 20:14:12.573569  486208 kubeadm.go:310] 	- The kubelet is not running
	I0819 20:14:12.573709  486208 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 20:14:12.573722  486208 kubeadm.go:310] 
	I0819 20:14:12.573891  486208 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 20:14:12.573963  486208 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0819 20:14:12.574009  486208 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0819 20:14:12.574018  486208 kubeadm.go:310] 
	I0819 20:14:12.574182  486208 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 20:14:12.574306  486208 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 20:14:12.574428  486208 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0819 20:14:12.574715  486208 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 20:14:12.574852  486208 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0819 20:14:12.574962  486208 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0819 20:14:12.576739  486208 kubeadm.go:310] W0819 20:10:10.898706    9639 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 20:14:12.577173  486208 kubeadm.go:310] W0819 20:10:10.899894    9639 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 20:14:12.577345  486208 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 20:14:12.577473  486208 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0819 20:14:12.577591  486208 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 20:14:12.577724  486208 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 520.614511ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000431396s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0819 20:10:10.898706    9639 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0819 20:10:10.899894    9639 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 20:14:12.577784  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 20:14:13.387420  486208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:14:13.402273  486208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 20:14:13.412853  486208 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 20:14:13.412882  486208 kubeadm.go:157] found existing configuration files:
	
	I0819 20:14:13.412942  486208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 20:14:13.422770  486208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 20:14:13.422855  486208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 20:14:13.433263  486208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 20:14:13.443024  486208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 20:14:13.443099  486208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 20:14:13.453098  486208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 20:14:13.462561  486208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 20:14:13.462632  486208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 20:14:13.472492  486208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 20:14:13.482095  486208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 20:14:13.482159  486208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 20:14:13.492491  486208 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 20:14:13.537018  486208 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 20:14:13.537234  486208 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 20:14:13.628735  486208 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 20:14:13.628910  486208 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 20:14:13.629056  486208 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 20:14:13.641943  486208 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 20:14:13.644567  486208 out.go:235]   - Generating certificates and keys ...
	I0819 20:14:13.644660  486208 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 20:14:13.644774  486208 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 20:14:13.644884  486208 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 20:14:13.644941  486208 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 20:14:13.645002  486208 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 20:14:13.645053  486208 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 20:14:13.645106  486208 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 20:14:13.645178  486208 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 20:14:13.645246  486208 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 20:14:13.645310  486208 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 20:14:13.645343  486208 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 20:14:13.645397  486208 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 20:14:13.792866  486208 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 20:14:14.168735  486208 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 20:14:14.408336  486208 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 20:14:14.650686  486208 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 20:14:14.852108  486208 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 20:14:14.852836  486208 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 20:14:14.855461  486208 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 20:14:11.291421  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:13.292270  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:11.228828  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:13.229367  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:14.856915  486208 out.go:235]   - Booting up control plane ...
	I0819 20:14:14.857056  486208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 20:14:14.857229  486208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 20:14:14.859823  486208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 20:14:14.878014  486208 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 20:14:14.884394  486208 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 20:14:14.884494  486208 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 20:14:15.015401  486208 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 20:14:15.015562  486208 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 20:14:15.517394  486208 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.942981ms
	I0819 20:14:15.517525  486208 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 20:14:12.987334  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:12.987382  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:15.532137  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:15.544903  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:15.544981  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:15.579791  487755 cri.go:89] found id: ""
	I0819 20:14:15.579818  487755 logs.go:276] 0 containers: []
	W0819 20:14:15.579828  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:15.579834  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:15.579886  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:15.615569  487755 cri.go:89] found id: ""
	I0819 20:14:15.615597  487755 logs.go:276] 0 containers: []
	W0819 20:14:15.615610  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:15.615619  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:15.615678  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:15.648460  487755 cri.go:89] found id: ""
	I0819 20:14:15.648490  487755 logs.go:276] 0 containers: []
	W0819 20:14:15.648502  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:15.648517  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:15.648593  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:15.683365  487755 cri.go:89] found id: ""
	I0819 20:14:15.683396  487755 logs.go:276] 0 containers: []
	W0819 20:14:15.683404  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:15.683412  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:15.683476  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:15.717077  487755 cri.go:89] found id: ""
	I0819 20:14:15.717106  487755 logs.go:276] 0 containers: []
	W0819 20:14:15.717115  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:15.717120  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:15.717215  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:15.753084  487755 cri.go:89] found id: ""
	I0819 20:14:15.753112  487755 logs.go:276] 0 containers: []
	W0819 20:14:15.753120  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:15.753128  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:15.753211  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:15.789007  487755 cri.go:89] found id: ""
	I0819 20:14:15.789040  487755 logs.go:276] 0 containers: []
	W0819 20:14:15.789051  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:15.789059  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:15.789125  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:15.829672  487755 cri.go:89] found id: ""
	I0819 20:14:15.829711  487755 logs.go:276] 0 containers: []
	W0819 20:14:15.829723  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:15.829737  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:15.829752  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:15.880715  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:15.880758  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:15.894514  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:15.894548  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:15.994144  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:15.994172  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:15.994190  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:16.078107  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:16.078152  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:15.792550  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:17.793050  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:15.229981  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:17.728015  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:18.626470  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:18.638888  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:18.638962  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:18.670591  487755 cri.go:89] found id: ""
	I0819 20:14:18.670626  487755 logs.go:276] 0 containers: []
	W0819 20:14:18.670640  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:18.670649  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:18.670714  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:18.705354  487755 cri.go:89] found id: ""
	I0819 20:14:18.705396  487755 logs.go:276] 0 containers: []
	W0819 20:14:18.705408  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:18.705418  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:18.705488  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:18.740297  487755 cri.go:89] found id: ""
	I0819 20:14:18.740325  487755 logs.go:276] 0 containers: []
	W0819 20:14:18.740333  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:18.740339  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:18.740397  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:18.774185  487755 cri.go:89] found id: ""
	I0819 20:14:18.774214  487755 logs.go:276] 0 containers: []
	W0819 20:14:18.774224  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:18.774233  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:18.774303  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:18.808575  487755 cri.go:89] found id: ""
	I0819 20:14:18.808605  487755 logs.go:276] 0 containers: []
	W0819 20:14:18.808614  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:18.808620  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:18.808687  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:18.842132  487755 cri.go:89] found id: ""
	I0819 20:14:18.842166  487755 logs.go:276] 0 containers: []
	W0819 20:14:18.842179  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:18.842187  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:18.842257  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:18.876526  487755 cri.go:89] found id: ""
	I0819 20:14:18.876555  487755 logs.go:276] 0 containers: []
	W0819 20:14:18.876564  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:18.876570  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:18.876623  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:18.910104  487755 cri.go:89] found id: ""
	I0819 20:14:18.910136  487755 logs.go:276] 0 containers: []
	W0819 20:14:18.910148  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:18.910160  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:18.910176  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:18.977501  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:18.977528  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:18.977544  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:19.058187  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:19.058234  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:19.094812  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:19.094846  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:19.145725  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:19.145767  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:21.661648  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:21.674625  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:21.674696  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:21.719797  487755 cri.go:89] found id: ""
	I0819 20:14:21.719836  487755 logs.go:276] 0 containers: []
	W0819 20:14:21.719849  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:21.719857  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:21.719927  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:21.756861  487755 cri.go:89] found id: ""
	I0819 20:14:21.756891  487755 logs.go:276] 0 containers: []
	W0819 20:14:21.756900  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:21.756909  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:21.756983  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:21.791784  487755 cri.go:89] found id: ""
	I0819 20:14:21.791816  487755 logs.go:276] 0 containers: []
	W0819 20:14:21.791826  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:21.791834  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:21.791907  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:21.825526  487755 cri.go:89] found id: ""
	I0819 20:14:21.825555  487755 logs.go:276] 0 containers: []
	W0819 20:14:21.825563  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:21.825570  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:21.825629  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:21.858817  487755 cri.go:89] found id: ""
	I0819 20:14:21.858851  487755 logs.go:276] 0 containers: []
	W0819 20:14:21.858863  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:21.858872  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:21.858945  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:21.894033  487755 cri.go:89] found id: ""
	I0819 20:14:21.894061  487755 logs.go:276] 0 containers: []
	W0819 20:14:21.894072  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:21.894080  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:21.894147  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:21.934433  487755 cri.go:89] found id: ""
	I0819 20:14:21.934469  487755 logs.go:276] 0 containers: []
	W0819 20:14:21.934482  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:21.934491  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:21.934574  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:21.966291  487755 cri.go:89] found id: ""
	I0819 20:14:21.966331  487755 logs.go:276] 0 containers: []
	W0819 20:14:21.966344  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:21.966357  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:21.966374  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:22.017880  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:22.017928  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:22.033342  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:22.033386  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:22.108443  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:22.108471  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:22.108489  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:22.188599  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:22.188641  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:20.291841  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:22.791376  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:24.792308  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:20.229465  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:22.729039  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:24.730373  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:24.743322  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:24.743393  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:24.778519  487755 cri.go:89] found id: ""
	I0819 20:14:24.778553  487755 logs.go:276] 0 containers: []
	W0819 20:14:24.778562  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:24.778569  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:24.778622  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:24.812369  487755 cri.go:89] found id: ""
	I0819 20:14:24.812402  487755 logs.go:276] 0 containers: []
	W0819 20:14:24.812413  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:24.812421  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:24.812482  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:24.847134  487755 cri.go:89] found id: ""
	I0819 20:14:24.847171  487755 logs.go:276] 0 containers: []
	W0819 20:14:24.847182  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:24.847188  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:24.847253  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:24.881355  487755 cri.go:89] found id: ""
	I0819 20:14:24.881390  487755 logs.go:276] 0 containers: []
	W0819 20:14:24.881403  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:24.881412  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:24.881483  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:24.915051  487755 cri.go:89] found id: ""
	I0819 20:14:24.915081  487755 logs.go:276] 0 containers: []
	W0819 20:14:24.915093  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:24.915101  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:24.915170  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:24.949216  487755 cri.go:89] found id: ""
	I0819 20:14:24.949242  487755 logs.go:276] 0 containers: []
	W0819 20:14:24.949250  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:24.949256  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:24.949319  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:24.987531  487755 cri.go:89] found id: ""
	I0819 20:14:24.987565  487755 logs.go:276] 0 containers: []
	W0819 20:14:24.987576  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:24.987585  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:24.987647  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:25.022034  487755 cri.go:89] found id: ""
	I0819 20:14:25.022065  487755 logs.go:276] 0 containers: []
	W0819 20:14:25.022074  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:25.022083  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:25.022096  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:25.072799  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:25.072847  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:25.086544  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:25.086586  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:25.152862  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:25.152887  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:25.152900  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:25.233708  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:25.233747  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:27.773162  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:27.786620  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:27.786707  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:27.824217  487755 cri.go:89] found id: ""
	I0819 20:14:27.824250  487755 logs.go:276] 0 containers: []
	W0819 20:14:27.824270  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:27.824278  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:27.824343  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:27.859780  487755 cri.go:89] found id: ""
	I0819 20:14:27.859813  487755 logs.go:276] 0 containers: []
	W0819 20:14:27.859826  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:27.859835  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:27.859899  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:27.894725  487755 cri.go:89] found id: ""
	I0819 20:14:27.894757  487755 logs.go:276] 0 containers: []
	W0819 20:14:27.894771  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:27.894779  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:27.894850  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:27.930104  487755 cri.go:89] found id: ""
	I0819 20:14:27.930135  487755 logs.go:276] 0 containers: []
	W0819 20:14:27.930146  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:27.930153  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:27.930215  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:27.292364  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:29.791625  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:25.228928  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:27.229320  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:29.727944  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:27.964786  487755 cri.go:89] found id: ""
	I0819 20:14:27.964817  487755 logs.go:276] 0 containers: []
	W0819 20:14:27.964828  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:27.964837  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:27.964906  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:28.003985  487755 cri.go:89] found id: ""
	I0819 20:14:28.004012  487755 logs.go:276] 0 containers: []
	W0819 20:14:28.004026  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:28.004035  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:28.004098  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:28.040297  487755 cri.go:89] found id: ""
	I0819 20:14:28.040337  487755 logs.go:276] 0 containers: []
	W0819 20:14:28.040349  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:28.040357  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:28.040426  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:28.079378  487755 cri.go:89] found id: ""
	I0819 20:14:28.079414  487755 logs.go:276] 0 containers: []
	W0819 20:14:28.079425  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:28.079440  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:28.079454  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:28.130107  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:28.130146  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:28.144356  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:28.144389  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:28.209699  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:28.209726  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:28.209744  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:28.293610  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:28.293649  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:30.839319  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:30.852548  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:30.852617  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:30.890411  487755 cri.go:89] found id: ""
	I0819 20:14:30.890451  487755 logs.go:276] 0 containers: []
	W0819 20:14:30.890464  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:30.890472  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:30.890542  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:30.928073  487755 cri.go:89] found id: ""
	I0819 20:14:30.928109  487755 logs.go:276] 0 containers: []
	W0819 20:14:30.928128  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:30.928136  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:30.928205  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:30.962157  487755 cri.go:89] found id: ""
	I0819 20:14:30.962185  487755 logs.go:276] 0 containers: []
	W0819 20:14:30.962194  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:30.962200  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:30.962254  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:30.999017  487755 cri.go:89] found id: ""
	I0819 20:14:30.999045  487755 logs.go:276] 0 containers: []
	W0819 20:14:30.999057  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:30.999065  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:30.999132  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:31.032743  487755 cri.go:89] found id: ""
	I0819 20:14:31.032782  487755 logs.go:276] 0 containers: []
	W0819 20:14:31.032790  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:31.032796  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:31.032862  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:31.071621  487755 cri.go:89] found id: ""
	I0819 20:14:31.071654  487755 logs.go:276] 0 containers: []
	W0819 20:14:31.071662  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:31.071668  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:31.071737  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:31.108502  487755 cri.go:89] found id: ""
	I0819 20:14:31.108538  487755 logs.go:276] 0 containers: []
	W0819 20:14:31.108550  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:31.108559  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:31.108628  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:31.143730  487755 cri.go:89] found id: ""
	I0819 20:14:31.143761  487755 logs.go:276] 0 containers: []
	W0819 20:14:31.143773  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:31.143791  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:31.143809  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:31.219428  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:31.219473  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:31.263755  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:31.263790  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:31.316485  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:31.316540  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:31.330384  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:31.330426  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:31.401124  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:31.792144  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:34.291061  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:31.729092  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:34.228325  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:33.902034  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:33.914495  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:33.914577  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:33.948594  487755 cri.go:89] found id: ""
	I0819 20:14:33.948624  487755 logs.go:276] 0 containers: []
	W0819 20:14:33.948633  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:33.948639  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:33.948702  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:33.984744  487755 cri.go:89] found id: ""
	I0819 20:14:33.984777  487755 logs.go:276] 0 containers: []
	W0819 20:14:33.984789  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:33.984798  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:33.984860  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:34.019555  487755 cri.go:89] found id: ""
	I0819 20:14:34.019592  487755 logs.go:276] 0 containers: []
	W0819 20:14:34.019601  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:34.019609  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:34.019666  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:34.054187  487755 cri.go:89] found id: ""
	I0819 20:14:34.054225  487755 logs.go:276] 0 containers: []
	W0819 20:14:34.054237  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:34.054245  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:34.054319  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:34.091009  487755 cri.go:89] found id: ""
	I0819 20:14:34.091047  487755 logs.go:276] 0 containers: []
	W0819 20:14:34.091059  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:34.091067  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:34.091134  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:34.139575  487755 cri.go:89] found id: ""
	I0819 20:14:34.139620  487755 logs.go:276] 0 containers: []
	W0819 20:14:34.139635  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:34.139644  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:34.139713  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:34.175108  487755 cri.go:89] found id: ""
	I0819 20:14:34.175144  487755 logs.go:276] 0 containers: []
	W0819 20:14:34.175153  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:34.175161  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:34.175253  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:34.220454  487755 cri.go:89] found id: ""
	I0819 20:14:34.220491  487755 logs.go:276] 0 containers: []
	W0819 20:14:34.220505  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:34.220518  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:34.220539  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:34.270399  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:34.270442  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:34.284097  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:34.284131  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:34.354307  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:34.354336  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:34.354348  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:34.439318  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:34.439362  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:36.977339  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:36.990063  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:36.990144  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:37.027857  487755 cri.go:89] found id: ""
	I0819 20:14:37.027888  487755 logs.go:276] 0 containers: []
	W0819 20:14:37.027897  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:37.027903  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:37.027968  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:37.062738  487755 cri.go:89] found id: ""
	I0819 20:14:37.062768  487755 logs.go:276] 0 containers: []
	W0819 20:14:37.062779  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:37.062787  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:37.062858  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:37.101564  487755 cri.go:89] found id: ""
	I0819 20:14:37.101595  487755 logs.go:276] 0 containers: []
	W0819 20:14:37.101604  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:37.101610  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:37.101673  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:37.136963  487755 cri.go:89] found id: ""
	I0819 20:14:37.136995  487755 logs.go:276] 0 containers: []
	W0819 20:14:37.137007  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:37.137015  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:37.137087  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:37.171083  487755 cri.go:89] found id: ""
	I0819 20:14:37.171112  487755 logs.go:276] 0 containers: []
	W0819 20:14:37.171121  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:37.171127  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:37.171181  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:37.205929  487755 cri.go:89] found id: ""
	I0819 20:14:37.205963  487755 logs.go:276] 0 containers: []
	W0819 20:14:37.205975  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:37.205988  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:37.206058  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:37.240723  487755 cri.go:89] found id: ""
	I0819 20:14:37.240754  487755 logs.go:276] 0 containers: []
	W0819 20:14:37.240766  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:37.240773  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:37.240838  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:37.277470  487755 cri.go:89] found id: ""
	I0819 20:14:37.277523  487755 logs.go:276] 0 containers: []
	W0819 20:14:37.277535  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:37.277548  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:37.277567  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:37.315494  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:37.315527  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:37.367813  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:37.367860  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:37.381473  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:37.381508  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:37.453967  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:37.453995  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:37.454012  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:36.291258  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:38.291753  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:36.228979  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:38.729011  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:40.029621  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:40.042447  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:40.042510  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:40.075644  487755 cri.go:89] found id: ""
	I0819 20:14:40.075680  487755 logs.go:276] 0 containers: []
	W0819 20:14:40.075690  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:40.075696  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:40.075758  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:40.109196  487755 cri.go:89] found id: ""
	I0819 20:14:40.109231  487755 logs.go:276] 0 containers: []
	W0819 20:14:40.109240  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:40.109246  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:40.109308  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:40.142807  487755 cri.go:89] found id: ""
	I0819 20:14:40.142834  487755 logs.go:276] 0 containers: []
	W0819 20:14:40.142842  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:40.142848  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:40.142912  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:40.177447  487755 cri.go:89] found id: ""
	I0819 20:14:40.177475  487755 logs.go:276] 0 containers: []
	W0819 20:14:40.177495  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:40.177502  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:40.177566  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:40.210435  487755 cri.go:89] found id: ""
	I0819 20:14:40.210467  487755 logs.go:276] 0 containers: []
	W0819 20:14:40.210476  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:40.210482  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:40.210561  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:40.244066  487755 cri.go:89] found id: ""
	I0819 20:14:40.244092  487755 logs.go:276] 0 containers: []
	W0819 20:14:40.244102  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:40.244108  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:40.244164  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:40.277197  487755 cri.go:89] found id: ""
	I0819 20:14:40.277228  487755 logs.go:276] 0 containers: []
	W0819 20:14:40.277237  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:40.277244  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:40.277301  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:40.310958  487755 cri.go:89] found id: ""
	I0819 20:14:40.310988  487755 logs.go:276] 0 containers: []
	W0819 20:14:40.310998  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
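Each "listing CRI containers" step above runs crictl ps -a --quiet with a --name filter and finds nothing, which is why every control-plane component is reported as missing. A hedged way to see the same picture by hand, assuming CRI-O is the runtime as in this job (illustrative commands, run inside the node):

	# all containers known to the runtime, in any state, without a name filter
	sudo crictl ps -a
	# pod sandboxes exist even when their containers never started
	sudo crictl pods
	# and the runtime service itself
	systemctl status crio --no-pager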
	I0819 20:14:40.311012  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:40.311029  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:40.364918  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:40.364963  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:40.378470  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:40.378500  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:40.444008  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:40.444033  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:40.444049  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:40.526701  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:40.526747  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:40.292204  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:42.791023  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:44.791660  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:41.228868  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:43.728864  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
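The interleaved pod_ready.go lines come from two other test profiles polling a metrics-server pod whose Ready condition stays False. A rough manual equivalent of that check (the pod name is taken from the log; the <profile> placeholder and the k8s-app=metrics-server selector are assumptions, not read from this run):

	kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-pwvmg \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or watch the whole set of metrics-server pods
	kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server -w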
	I0819 20:14:43.063562  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:43.076571  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:43.076648  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:43.116750  487755 cri.go:89] found id: ""
	I0819 20:14:43.116796  487755 logs.go:276] 0 containers: []
	W0819 20:14:43.116806  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:43.116813  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:43.116878  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:43.157196  487755 cri.go:89] found id: ""
	I0819 20:14:43.157228  487755 logs.go:276] 0 containers: []
	W0819 20:14:43.157237  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:43.157244  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:43.157304  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:43.195271  487755 cri.go:89] found id: ""
	I0819 20:14:43.195300  487755 logs.go:276] 0 containers: []
	W0819 20:14:43.195308  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:43.195314  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:43.195382  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:43.229677  487755 cri.go:89] found id: ""
	I0819 20:14:43.229702  487755 logs.go:276] 0 containers: []
	W0819 20:14:43.229710  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:43.229717  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:43.229772  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:43.263683  487755 cri.go:89] found id: ""
	I0819 20:14:43.263719  487755 logs.go:276] 0 containers: []
	W0819 20:14:43.263731  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:43.263746  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:43.263814  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:43.302040  487755 cri.go:89] found id: ""
	I0819 20:14:43.302074  487755 logs.go:276] 0 containers: []
	W0819 20:14:43.302086  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:43.302094  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:43.302162  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:43.340412  487755 cri.go:89] found id: ""
	I0819 20:14:43.340445  487755 logs.go:276] 0 containers: []
	W0819 20:14:43.340457  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:43.340465  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:43.340533  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:43.373059  487755 cri.go:89] found id: ""
	I0819 20:14:43.373093  487755 logs.go:276] 0 containers: []
	W0819 20:14:43.373113  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:43.373146  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:43.373165  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:43.423940  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:43.423986  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:43.437926  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:43.437976  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:43.511436  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:43.511462  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:43.511479  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:43.584655  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:43.584701  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
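The "container status" step uses a fallback chain: prefer the crictl found on PATH, otherwise try the bare name, and finally fall back to docker ps -a. An illustrative expansion of that one-liner (not the exact minikube source):

	# equivalent to: sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	CRICTL=$(which crictl || echo crictl)      # bare name if crictl is not on PATH
	sudo "$CRICTL" ps -a || sudo docker ps -a  # docker only matters on Docker-based runtimes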
	I0819 20:14:46.123913  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:46.136972  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:46.137041  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:46.171812  487755 cri.go:89] found id: ""
	I0819 20:14:46.171846  487755 logs.go:276] 0 containers: []
	W0819 20:14:46.171855  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:46.171863  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:46.171921  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:46.206840  487755 cri.go:89] found id: ""
	I0819 20:14:46.206874  487755 logs.go:276] 0 containers: []
	W0819 20:14:46.206885  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:46.206893  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:46.206959  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:46.241247  487755 cri.go:89] found id: ""
	I0819 20:14:46.241277  487755 logs.go:276] 0 containers: []
	W0819 20:14:46.241288  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:46.241296  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:46.241368  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:46.275928  487755 cri.go:89] found id: ""
	I0819 20:14:46.275963  487755 logs.go:276] 0 containers: []
	W0819 20:14:46.275974  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:46.275982  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:46.276053  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:46.311106  487755 cri.go:89] found id: ""
	I0819 20:14:46.311137  487755 logs.go:276] 0 containers: []
	W0819 20:14:46.311149  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:46.311156  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:46.311226  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:46.348442  487755 cri.go:89] found id: ""
	I0819 20:14:46.348473  487755 logs.go:276] 0 containers: []
	W0819 20:14:46.348484  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:46.348493  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:46.348555  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:46.383060  487755 cri.go:89] found id: ""
	I0819 20:14:46.383097  487755 logs.go:276] 0 containers: []
	W0819 20:14:46.383110  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:46.383118  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:46.383191  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:46.416608  487755 cri.go:89] found id: ""
	I0819 20:14:46.416640  487755 logs.go:276] 0 containers: []
	W0819 20:14:46.416649  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:46.416659  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:46.416672  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:46.492014  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:46.492059  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:46.534647  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:46.534676  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:46.584594  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:46.584644  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:46.598494  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:46.598529  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:46.663341  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
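The kubelet, CRI-O and dmesg sections in each cycle are gathered with plain journalctl/dmesg over SSH. To pull the same data by hand from the host, one could go through minikube ssh (illustrative; <profile> stands for whatever profile name this run used):

	minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400 --no-pager
	minikube -p <profile> ssh -- sudo journalctl -u crio -n 400 --no-pager
	minikube -p <profile> ssh -- "sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400"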
	I0819 20:14:46.795159  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:49.291911  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:46.229321  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:48.229835  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:49.163671  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:49.176475  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:49.176550  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:49.210283  487755 cri.go:89] found id: ""
	I0819 20:14:49.210311  487755 logs.go:276] 0 containers: []
	W0819 20:14:49.210322  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:49.210329  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:49.210399  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:49.244509  487755 cri.go:89] found id: ""
	I0819 20:14:49.244550  487755 logs.go:276] 0 containers: []
	W0819 20:14:49.244560  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:49.244566  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:49.244624  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:49.278187  487755 cri.go:89] found id: ""
	I0819 20:14:49.278213  487755 logs.go:276] 0 containers: []
	W0819 20:14:49.278222  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:49.278229  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:49.278281  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:49.313536  487755 cri.go:89] found id: ""
	I0819 20:14:49.313564  487755 logs.go:276] 0 containers: []
	W0819 20:14:49.313573  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:49.313580  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:49.313643  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:49.347878  487755 cri.go:89] found id: ""
	I0819 20:14:49.347911  487755 logs.go:276] 0 containers: []
	W0819 20:14:49.347922  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:49.347932  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:49.347993  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:49.381999  487755 cri.go:89] found id: ""
	I0819 20:14:49.382033  487755 logs.go:276] 0 containers: []
	W0819 20:14:49.382045  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:49.382054  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:49.382133  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:49.417032  487755 cri.go:89] found id: ""
	I0819 20:14:49.417059  487755 logs.go:276] 0 containers: []
	W0819 20:14:49.417068  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:49.417073  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:49.417155  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:49.453511  487755 cri.go:89] found id: ""
	I0819 20:14:49.453543  487755 logs.go:276] 0 containers: []
	W0819 20:14:49.453553  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:49.453573  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:49.453591  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:49.504597  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:49.504638  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:49.518486  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:49.518516  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:49.588800  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:49.588842  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:49.588859  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:49.666595  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:49.666643  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
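Each retry cycle in this log opens with sudo pgrep -xnf kube-apiserver.*minikube.*, i.e. a check for a running kube-apiserver process whose command line mentions minikube; because it never matches, the crictl/journalctl sweep above repeats every few seconds. The same probe run by hand (illustrative):

	# -f matches against the full command line, -x requires the pattern to match it entirely,
	# -n keeps only the newest match
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no running kube-apiserver found"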
	I0819 20:14:52.204710  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:52.217814  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:52.217893  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:52.253745  487755 cri.go:89] found id: ""
	I0819 20:14:52.253785  487755 logs.go:276] 0 containers: []
	W0819 20:14:52.253797  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:52.253806  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:52.253879  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:52.288662  487755 cri.go:89] found id: ""
	I0819 20:14:52.288696  487755 logs.go:276] 0 containers: []
	W0819 20:14:52.288706  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:52.288714  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:52.288783  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:52.330971  487755 cri.go:89] found id: ""
	I0819 20:14:52.331002  487755 logs.go:276] 0 containers: []
	W0819 20:14:52.331010  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:52.331017  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:52.331080  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:52.364263  487755 cri.go:89] found id: ""
	I0819 20:14:52.364290  487755 logs.go:276] 0 containers: []
	W0819 20:14:52.364298  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:52.364304  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:52.364368  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:52.397725  487755 cri.go:89] found id: ""
	I0819 20:14:52.397760  487755 logs.go:276] 0 containers: []
	W0819 20:14:52.397770  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:52.397777  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:52.397831  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:52.432424  487755 cri.go:89] found id: ""
	I0819 20:14:52.432451  487755 logs.go:276] 0 containers: []
	W0819 20:14:52.432460  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:52.432466  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:52.432522  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:52.465427  487755 cri.go:89] found id: ""
	I0819 20:14:52.465458  487755 logs.go:276] 0 containers: []
	W0819 20:14:52.465468  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:52.465473  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:52.465548  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:52.501630  487755 cri.go:89] found id: ""
	I0819 20:14:52.501663  487755 logs.go:276] 0 containers: []
	W0819 20:14:52.501674  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:52.501687  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:52.501704  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:52.514694  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:52.514731  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:52.584533  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:52.584560  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:52.584579  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:52.663774  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:52.663816  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:52.701730  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:52.701768  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:51.791112  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:53.791351  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:50.728428  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:52.729914  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:55.251335  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:55.265565  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:55.265652  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:55.301716  487755 cri.go:89] found id: ""
	I0819 20:14:55.301745  487755 logs.go:276] 0 containers: []
	W0819 20:14:55.301753  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:55.301765  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:55.301825  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:55.336547  487755 cri.go:89] found id: ""
	I0819 20:14:55.336585  487755 logs.go:276] 0 containers: []
	W0819 20:14:55.336594  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:55.336599  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:55.336661  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:55.371823  487755 cri.go:89] found id: ""
	I0819 20:14:55.371857  487755 logs.go:276] 0 containers: []
	W0819 20:14:55.371865  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:55.371871  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:55.371928  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:55.406412  487755 cri.go:89] found id: ""
	I0819 20:14:55.406449  487755 logs.go:276] 0 containers: []
	W0819 20:14:55.406461  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:55.406469  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:55.406554  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:55.441762  487755 cri.go:89] found id: ""
	I0819 20:14:55.441791  487755 logs.go:276] 0 containers: []
	W0819 20:14:55.441800  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:55.441808  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:55.441882  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:55.475417  487755 cri.go:89] found id: ""
	I0819 20:14:55.475452  487755 logs.go:276] 0 containers: []
	W0819 20:14:55.475467  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:55.475475  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:55.475554  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:55.509040  487755 cri.go:89] found id: ""
	I0819 20:14:55.509071  487755 logs.go:276] 0 containers: []
	W0819 20:14:55.509082  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:55.509088  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:55.509174  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:55.544755  487755 cri.go:89] found id: ""
	I0819 20:14:55.544784  487755 logs.go:276] 0 containers: []
	W0819 20:14:55.544802  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:55.544815  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:55.544832  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:55.601759  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:55.601803  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:55.615062  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:55.615099  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:55.685887  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:55.685921  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:55.685947  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:55.769156  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:55.769216  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:14:55.792678  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:58.291483  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:55.229276  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:57.728266  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:59.729109  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:14:58.307328  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:14:58.321952  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:14:58.322040  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:14:58.357013  487755 cri.go:89] found id: ""
	I0819 20:14:58.357040  487755 logs.go:276] 0 containers: []
	W0819 20:14:58.357048  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:14:58.357054  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:14:58.357108  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:14:58.391452  487755 cri.go:89] found id: ""
	I0819 20:14:58.391542  487755 logs.go:276] 0 containers: []
	W0819 20:14:58.391559  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:14:58.391573  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:14:58.391647  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:14:58.425927  487755 cri.go:89] found id: ""
	I0819 20:14:58.425960  487755 logs.go:276] 0 containers: []
	W0819 20:14:58.425972  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:14:58.425980  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:14:58.426045  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:14:58.464814  487755 cri.go:89] found id: ""
	I0819 20:14:58.464847  487755 logs.go:276] 0 containers: []
	W0819 20:14:58.464854  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:14:58.464864  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:14:58.464918  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:14:58.498404  487755 cri.go:89] found id: ""
	I0819 20:14:58.498440  487755 logs.go:276] 0 containers: []
	W0819 20:14:58.498448  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:14:58.498456  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:14:58.498518  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:14:58.533724  487755 cri.go:89] found id: ""
	I0819 20:14:58.533762  487755 logs.go:276] 0 containers: []
	W0819 20:14:58.533774  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:14:58.533784  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:14:58.533855  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:14:58.569334  487755 cri.go:89] found id: ""
	I0819 20:14:58.569372  487755 logs.go:276] 0 containers: []
	W0819 20:14:58.569384  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:14:58.569393  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:14:58.569465  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:14:58.603409  487755 cri.go:89] found id: ""
	I0819 20:14:58.603469  487755 logs.go:276] 0 containers: []
	W0819 20:14:58.603481  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:14:58.603494  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:14:58.603511  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:14:58.655353  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:14:58.655395  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:14:58.668821  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:14:58.668855  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:14:58.736143  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:14:58.736171  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:14:58.736191  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:14:58.818469  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:14:58.818514  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:15:01.357992  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:15:01.370988  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:15:01.371063  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:15:01.407567  487755 cri.go:89] found id: ""
	I0819 20:15:01.407596  487755 logs.go:276] 0 containers: []
	W0819 20:15:01.407608  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:15:01.407615  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:15:01.407698  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:15:01.441795  487755 cri.go:89] found id: ""
	I0819 20:15:01.441825  487755 logs.go:276] 0 containers: []
	W0819 20:15:01.441834  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:15:01.441839  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:15:01.441893  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:15:01.476702  487755 cri.go:89] found id: ""
	I0819 20:15:01.476736  487755 logs.go:276] 0 containers: []
	W0819 20:15:01.476745  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:15:01.476751  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:15:01.476819  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:15:01.511944  487755 cri.go:89] found id: ""
	I0819 20:15:01.511977  487755 logs.go:276] 0 containers: []
	W0819 20:15:01.511985  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:15:01.511993  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:15:01.512064  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:15:01.546143  487755 cri.go:89] found id: ""
	I0819 20:15:01.546175  487755 logs.go:276] 0 containers: []
	W0819 20:15:01.546184  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:15:01.546189  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:15:01.546259  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:15:01.580107  487755 cri.go:89] found id: ""
	I0819 20:15:01.580144  487755 logs.go:276] 0 containers: []
	W0819 20:15:01.580158  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:15:01.580169  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:15:01.580244  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:15:01.618483  487755 cri.go:89] found id: ""
	I0819 20:15:01.618514  487755 logs.go:276] 0 containers: []
	W0819 20:15:01.618522  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:15:01.618529  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:15:01.618599  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:15:01.651250  487755 cri.go:89] found id: ""
	I0819 20:15:01.651276  487755 logs.go:276] 0 containers: []
	W0819 20:15:01.651284  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:15:01.651293  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:15:01.651305  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:15:01.731781  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:15:01.731820  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:15:01.771070  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:15:01.771099  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:15:01.824526  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:15:01.824569  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:15:01.838231  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:15:01.838267  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:15:01.906420  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:15:00.791569  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:02.792018  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:04.792881  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:02.229453  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:04.728600  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:04.406678  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:15:04.420144  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:15:04.420227  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:15:04.455455  487755 cri.go:89] found id: ""
	I0819 20:15:04.455484  487755 logs.go:276] 0 containers: []
	W0819 20:15:04.455492  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:15:04.455498  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:15:04.455555  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:15:04.488279  487755 cri.go:89] found id: ""
	I0819 20:15:04.488309  487755 logs.go:276] 0 containers: []
	W0819 20:15:04.488319  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:15:04.488324  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:15:04.488377  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:15:04.522083  487755 cri.go:89] found id: ""
	I0819 20:15:04.522114  487755 logs.go:276] 0 containers: []
	W0819 20:15:04.522123  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:15:04.522129  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:15:04.522183  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:15:04.558822  487755 cri.go:89] found id: ""
	I0819 20:15:04.558853  487755 logs.go:276] 0 containers: []
	W0819 20:15:04.558861  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:15:04.558868  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:15:04.558924  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:15:04.593522  487755 cri.go:89] found id: ""
	I0819 20:15:04.593550  487755 logs.go:276] 0 containers: []
	W0819 20:15:04.593561  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:15:04.593569  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:15:04.593643  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:15:04.627509  487755 cri.go:89] found id: ""
	I0819 20:15:04.627538  487755 logs.go:276] 0 containers: []
	W0819 20:15:04.627547  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:15:04.627553  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:15:04.627618  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:15:04.659113  487755 cri.go:89] found id: ""
	I0819 20:15:04.659147  487755 logs.go:276] 0 containers: []
	W0819 20:15:04.659159  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:15:04.659168  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:15:04.659247  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:15:04.692401  487755 cri.go:89] found id: ""
	I0819 20:15:04.692432  487755 logs.go:276] 0 containers: []
	W0819 20:15:04.692441  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:15:04.692453  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:15:04.692466  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:15:04.706438  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:15:04.706487  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:15:04.774813  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:15:04.774839  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:15:04.774852  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:15:04.857820  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:15:04.857867  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:15:04.897599  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:15:04.897630  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:15:07.448250  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:15:07.461576  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:15:07.461645  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:15:07.496546  487755 cri.go:89] found id: ""
	I0819 20:15:07.496580  487755 logs.go:276] 0 containers: []
	W0819 20:15:07.496590  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:15:07.496596  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:15:07.496654  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:15:07.532052  487755 cri.go:89] found id: ""
	I0819 20:15:07.532089  487755 logs.go:276] 0 containers: []
	W0819 20:15:07.532099  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:15:07.532108  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:15:07.532204  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:15:07.565974  487755 cri.go:89] found id: ""
	I0819 20:15:07.566000  487755 logs.go:276] 0 containers: []
	W0819 20:15:07.566009  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:15:07.566015  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:15:07.566082  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:15:07.600234  487755 cri.go:89] found id: ""
	I0819 20:15:07.600265  487755 logs.go:276] 0 containers: []
	W0819 20:15:07.600276  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:15:07.600285  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:15:07.600354  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:15:07.638872  487755 cri.go:89] found id: ""
	I0819 20:15:07.638899  487755 logs.go:276] 0 containers: []
	W0819 20:15:07.638907  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:15:07.638913  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:15:07.638968  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:15:07.673589  487755 cri.go:89] found id: ""
	I0819 20:15:07.673618  487755 logs.go:276] 0 containers: []
	W0819 20:15:07.673626  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:15:07.673635  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:15:07.673687  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:15:07.708850  487755 cri.go:89] found id: ""
	I0819 20:15:07.708879  487755 logs.go:276] 0 containers: []
	W0819 20:15:07.708887  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:15:07.708892  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:15:07.708946  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:15:07.743731  487755 cri.go:89] found id: ""
	I0819 20:15:07.743765  487755 logs.go:276] 0 containers: []
	W0819 20:15:07.743777  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:15:07.743798  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:15:07.743816  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:15:07.792787  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:15:07.792825  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:15:07.806458  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:15:07.806493  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:15:07.875813  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:15:07.875839  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:15:07.875856  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:15:07.290689  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:09.291377  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:07.228740  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:09.230129  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:07.953512  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:15:07.953567  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:15:10.493540  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:15:10.506669  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:15:10.506750  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:15:10.540533  487755 cri.go:89] found id: ""
	I0819 20:15:10.540563  487755 logs.go:276] 0 containers: []
	W0819 20:15:10.540575  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:15:10.540582  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:15:10.540655  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:15:10.576729  487755 cri.go:89] found id: ""
	I0819 20:15:10.576756  487755 logs.go:276] 0 containers: []
	W0819 20:15:10.576765  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:15:10.576770  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:15:10.576832  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:15:10.614903  487755 cri.go:89] found id: ""
	I0819 20:15:10.614935  487755 logs.go:276] 0 containers: []
	W0819 20:15:10.614947  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:15:10.614954  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:15:10.615032  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:15:10.652716  487755 cri.go:89] found id: ""
	I0819 20:15:10.652758  487755 logs.go:276] 0 containers: []
	W0819 20:15:10.652778  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:15:10.652788  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:15:10.652861  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:15:10.686012  487755 cri.go:89] found id: ""
	I0819 20:15:10.686050  487755 logs.go:276] 0 containers: []
	W0819 20:15:10.686063  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:15:10.686071  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:15:10.686141  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:15:10.718846  487755 cri.go:89] found id: ""
	I0819 20:15:10.718876  487755 logs.go:276] 0 containers: []
	W0819 20:15:10.718891  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:15:10.718902  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:15:10.718967  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:15:10.754311  487755 cri.go:89] found id: ""
	I0819 20:15:10.754342  487755 logs.go:276] 0 containers: []
	W0819 20:15:10.754351  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:15:10.754363  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:15:10.754419  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:15:10.789758  487755 cri.go:89] found id: ""
	I0819 20:15:10.789789  487755 logs.go:276] 0 containers: []
	W0819 20:15:10.789801  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:15:10.789812  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:15:10.789831  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:15:10.844090  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:15:10.844128  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:15:10.857925  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:15:10.857958  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:15:10.929226  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:15:10.929248  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:15:10.929262  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:15:11.008438  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:15:11.008478  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:15:11.291960  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:13.296437  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:11.728970  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:13.729253  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:13.547136  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:15:13.559960  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:15:13.560050  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:15:13.598569  487755 cri.go:89] found id: ""
	I0819 20:15:13.598600  487755 logs.go:276] 0 containers: []
	W0819 20:15:13.598611  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:15:13.598620  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:15:13.598681  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:15:13.634197  487755 cri.go:89] found id: ""
	I0819 20:15:13.634232  487755 logs.go:276] 0 containers: []
	W0819 20:15:13.634241  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:15:13.634260  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:15:13.634344  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:15:13.679461  487755 cri.go:89] found id: ""
	I0819 20:15:13.679490  487755 logs.go:276] 0 containers: []
	W0819 20:15:13.679502  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:15:13.679509  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:15:13.679593  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:15:13.717147  487755 cri.go:89] found id: ""
	I0819 20:15:13.717194  487755 logs.go:276] 0 containers: []
	W0819 20:15:13.717206  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:15:13.717215  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:15:13.717272  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:15:13.751161  487755 cri.go:89] found id: ""
	I0819 20:15:13.751195  487755 logs.go:276] 0 containers: []
	W0819 20:15:13.751214  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:15:13.751221  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:15:13.751279  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:15:13.785497  487755 cri.go:89] found id: ""
	I0819 20:15:13.785527  487755 logs.go:276] 0 containers: []
	W0819 20:15:13.785538  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:15:13.785546  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:15:13.785612  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:15:13.822239  487755 cri.go:89] found id: ""
	I0819 20:15:13.822274  487755 logs.go:276] 0 containers: []
	W0819 20:15:13.822286  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:15:13.822295  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:15:13.822361  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:15:13.857248  487755 cri.go:89] found id: ""
	I0819 20:15:13.857283  487755 logs.go:276] 0 containers: []
	W0819 20:15:13.857295  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:15:13.857308  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:15:13.857326  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:15:13.932255  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:15:13.932279  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:15:13.932292  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:15:14.009767  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:15:14.009816  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:15:14.050400  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:15:14.050434  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:15:14.100936  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:15:14.100980  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:15:16.615903  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:15:16.629802  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:15:16.629894  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:15:16.665224  487755 cri.go:89] found id: ""
	I0819 20:15:16.665254  487755 logs.go:276] 0 containers: []
	W0819 20:15:16.665265  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:15:16.665274  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:15:16.665337  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:15:16.702983  487755 cri.go:89] found id: ""
	I0819 20:15:16.703013  487755 logs.go:276] 0 containers: []
	W0819 20:15:16.703025  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:15:16.703032  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:15:16.703099  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:15:16.738030  487755 cri.go:89] found id: ""
	I0819 20:15:16.738056  487755 logs.go:276] 0 containers: []
	W0819 20:15:16.738065  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:15:16.738071  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:15:16.738133  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:15:16.772118  487755 cri.go:89] found id: ""
	I0819 20:15:16.772157  487755 logs.go:276] 0 containers: []
	W0819 20:15:16.772170  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:15:16.772178  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:15:16.772240  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:15:16.806665  487755 cri.go:89] found id: ""
	I0819 20:15:16.806696  487755 logs.go:276] 0 containers: []
	W0819 20:15:16.806705  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:15:16.806711  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:15:16.806775  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:15:16.840746  487755 cri.go:89] found id: ""
	I0819 20:15:16.840785  487755 logs.go:276] 0 containers: []
	W0819 20:15:16.840798  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:15:16.840807  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:15:16.840882  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:15:16.876181  487755 cri.go:89] found id: ""
	I0819 20:15:16.876208  487755 logs.go:276] 0 containers: []
	W0819 20:15:16.876217  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:15:16.876229  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:15:16.876285  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:15:16.909008  487755 cri.go:89] found id: ""
	I0819 20:15:16.909035  487755 logs.go:276] 0 containers: []
	W0819 20:15:16.909042  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:15:16.909052  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:15:16.909065  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:15:16.962408  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:15:16.962453  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:15:16.977734  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:15:16.977767  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:15:17.044762  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:15:17.044785  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:15:17.044804  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:15:17.121114  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:15:17.121190  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:15:15.791384  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:18.291574  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:15.729727  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:18.228989  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:19.660695  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:15:19.673624  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:15:19.673714  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:15:19.708446  487755 cri.go:89] found id: ""
	I0819 20:15:19.708475  487755 logs.go:276] 0 containers: []
	W0819 20:15:19.708486  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:15:19.708496  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:15:19.708560  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:15:19.742706  487755 cri.go:89] found id: ""
	I0819 20:15:19.742735  487755 logs.go:276] 0 containers: []
	W0819 20:15:19.742746  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:15:19.742753  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:15:19.742827  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:15:19.777264  487755 cri.go:89] found id: ""
	I0819 20:15:19.777294  487755 logs.go:276] 0 containers: []
	W0819 20:15:19.777306  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:15:19.777314  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:15:19.777385  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:15:19.812994  487755 cri.go:89] found id: ""
	I0819 20:15:19.813024  487755 logs.go:276] 0 containers: []
	W0819 20:15:19.813035  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:15:19.813054  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:15:19.813153  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:15:19.848448  487755 cri.go:89] found id: ""
	I0819 20:15:19.848482  487755 logs.go:276] 0 containers: []
	W0819 20:15:19.848491  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:15:19.848498  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:15:19.848567  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:15:19.883875  487755 cri.go:89] found id: ""
	I0819 20:15:19.883909  487755 logs.go:276] 0 containers: []
	W0819 20:15:19.883922  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:15:19.883930  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:15:19.883996  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:15:19.919470  487755 cri.go:89] found id: ""
	I0819 20:15:19.919496  487755 logs.go:276] 0 containers: []
	W0819 20:15:19.919505  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:15:19.919511  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:15:19.919576  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:15:19.959903  487755 cri.go:89] found id: ""
	I0819 20:15:19.959937  487755 logs.go:276] 0 containers: []
	W0819 20:15:19.959950  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:15:19.959964  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:15:19.959985  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:15:20.010473  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:15:20.010514  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:15:20.024535  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:15:20.024569  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:15:20.101066  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:15:20.101096  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:15:20.101113  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:15:20.181225  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:15:20.181264  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:15:22.728224  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:15:22.741385  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:15:22.741449  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:15:22.776048  487755 cri.go:89] found id: ""
	I0819 20:15:22.776082  487755 logs.go:276] 0 containers: []
	W0819 20:15:22.776092  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:15:22.776098  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:15:22.776153  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:15:22.812341  487755 cri.go:89] found id: ""
	I0819 20:15:22.812379  487755 logs.go:276] 0 containers: []
	W0819 20:15:22.812391  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:15:22.812398  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:15:22.812462  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:15:22.849217  487755 cri.go:89] found id: ""
	I0819 20:15:22.849246  487755 logs.go:276] 0 containers: []
	W0819 20:15:22.849258  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:15:22.849266  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:15:22.849336  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:15:22.883647  487755 cri.go:89] found id: ""
	I0819 20:15:22.883675  487755 logs.go:276] 0 containers: []
	W0819 20:15:22.883686  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:15:22.883695  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:15:22.883766  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:15:22.921062  487755 cri.go:89] found id: ""
	I0819 20:15:22.921098  487755 logs.go:276] 0 containers: []
	W0819 20:15:22.921109  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:15:22.921116  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:15:22.921198  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:15:20.791476  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:22.792604  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:20.229373  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:22.232223  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:24.728891  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:22.956457  487755 cri.go:89] found id: ""
	I0819 20:15:22.956494  487755 logs.go:276] 0 containers: []
	W0819 20:15:22.956506  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:15:22.956515  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:15:22.956598  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:15:22.993155  487755 cri.go:89] found id: ""
	I0819 20:15:22.993187  487755 logs.go:276] 0 containers: []
	W0819 20:15:22.993198  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:15:22.993207  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:15:22.993279  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:15:23.029811  487755 cri.go:89] found id: ""
	I0819 20:15:23.029841  487755 logs.go:276] 0 containers: []
	W0819 20:15:23.029853  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:15:23.029867  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:15:23.029883  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:15:23.068777  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:15:23.068817  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:15:23.118797  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:15:23.118841  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:15:23.132345  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:15:23.132377  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:15:23.205041  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:15:23.205066  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:15:23.205082  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:15:25.791892  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:15:25.804934  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:15:25.805028  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:15:25.844589  487755 cri.go:89] found id: ""
	I0819 20:15:25.844617  487755 logs.go:276] 0 containers: []
	W0819 20:15:25.844628  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:15:25.844636  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:15:25.844703  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:15:25.892776  487755 cri.go:89] found id: ""
	I0819 20:15:25.892804  487755 logs.go:276] 0 containers: []
	W0819 20:15:25.892811  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:15:25.892817  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:15:25.892881  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:15:25.948525  487755 cri.go:89] found id: ""
	I0819 20:15:25.948555  487755 logs.go:276] 0 containers: []
	W0819 20:15:25.948566  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:15:25.948575  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:15:25.948646  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:15:25.989017  487755 cri.go:89] found id: ""
	I0819 20:15:25.989050  487755 logs.go:276] 0 containers: []
	W0819 20:15:25.989062  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:15:25.989079  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:15:25.989165  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:15:26.028428  487755 cri.go:89] found id: ""
	I0819 20:15:26.028455  487755 logs.go:276] 0 containers: []
	W0819 20:15:26.028463  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:15:26.028470  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:15:26.028531  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:15:26.062075  487755 cri.go:89] found id: ""
	I0819 20:15:26.062105  487755 logs.go:276] 0 containers: []
	W0819 20:15:26.062113  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:15:26.062120  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:15:26.062173  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:15:26.096698  487755 cri.go:89] found id: ""
	I0819 20:15:26.096730  487755 logs.go:276] 0 containers: []
	W0819 20:15:26.096739  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:15:26.096745  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:15:26.096814  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:15:26.131127  487755 cri.go:89] found id: ""
	I0819 20:15:26.131163  487755 logs.go:276] 0 containers: []
	W0819 20:15:26.131173  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:15:26.131183  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:15:26.131195  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:15:26.170740  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:15:26.170775  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:15:26.221853  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:15:26.221901  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:15:26.236571  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:15:26.236605  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:15:26.307834  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:15:26.307856  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:15:26.307870  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:15:25.291487  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:27.791641  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:29.793608  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:26.728979  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:28.729231  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:28.889815  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:15:28.903131  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:15:28.903212  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:15:28.938139  487755 cri.go:89] found id: ""
	I0819 20:15:28.938165  487755 logs.go:276] 0 containers: []
	W0819 20:15:28.938179  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:15:28.938187  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:15:28.938247  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:15:28.973293  487755 cri.go:89] found id: ""
	I0819 20:15:28.973321  487755 logs.go:276] 0 containers: []
	W0819 20:15:28.973334  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:15:28.973342  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:15:28.973411  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:15:29.010505  487755 cri.go:89] found id: ""
	I0819 20:15:29.010536  487755 logs.go:276] 0 containers: []
	W0819 20:15:29.010549  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:15:29.010557  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:15:29.010625  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:15:29.047597  487755 cri.go:89] found id: ""
	I0819 20:15:29.047628  487755 logs.go:276] 0 containers: []
	W0819 20:15:29.047640  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:15:29.047648  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:15:29.047748  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:15:29.084524  487755 cri.go:89] found id: ""
	I0819 20:15:29.084556  487755 logs.go:276] 0 containers: []
	W0819 20:15:29.084566  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:15:29.084572  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:15:29.084645  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:15:29.117316  487755 cri.go:89] found id: ""
	I0819 20:15:29.117343  487755 logs.go:276] 0 containers: []
	W0819 20:15:29.117360  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:15:29.117369  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:15:29.117434  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:15:29.150230  487755 cri.go:89] found id: ""
	I0819 20:15:29.150258  487755 logs.go:276] 0 containers: []
	W0819 20:15:29.150266  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:15:29.150272  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:15:29.150337  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:15:29.183816  487755 cri.go:89] found id: ""
	I0819 20:15:29.183848  487755 logs.go:276] 0 containers: []
	W0819 20:15:29.183856  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:15:29.183866  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:15:29.183878  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:15:29.224599  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:15:29.224642  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:15:29.276528  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:15:29.276573  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:15:29.292050  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:15:29.292081  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:15:29.362993  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:15:29.363016  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:15:29.363035  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:15:31.940622  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:15:31.953384  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:15:31.953459  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:15:31.985660  487755 cri.go:89] found id: ""
	I0819 20:15:31.985690  487755 logs.go:276] 0 containers: []
	W0819 20:15:31.985700  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:15:31.985707  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:15:31.985775  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:15:32.020315  487755 cri.go:89] found id: ""
	I0819 20:15:32.020346  487755 logs.go:276] 0 containers: []
	W0819 20:15:32.020355  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:15:32.020360  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:15:32.020420  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:15:32.054737  487755 cri.go:89] found id: ""
	I0819 20:15:32.054770  487755 logs.go:276] 0 containers: []
	W0819 20:15:32.054779  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:15:32.054785  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:15:32.054855  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:15:32.091995  487755 cri.go:89] found id: ""
	I0819 20:15:32.092031  487755 logs.go:276] 0 containers: []
	W0819 20:15:32.092041  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:15:32.092047  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:15:32.092101  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:15:32.124859  487755 cri.go:89] found id: ""
	I0819 20:15:32.124891  487755 logs.go:276] 0 containers: []
	W0819 20:15:32.124903  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:15:32.124911  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:15:32.124987  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:15:32.157716  487755 cri.go:89] found id: ""
	I0819 20:15:32.157750  487755 logs.go:276] 0 containers: []
	W0819 20:15:32.157759  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:15:32.157765  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:15:32.157822  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:15:32.193877  487755 cri.go:89] found id: ""
	I0819 20:15:32.193917  487755 logs.go:276] 0 containers: []
	W0819 20:15:32.193928  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:15:32.193938  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:15:32.194016  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:15:32.228498  487755 cri.go:89] found id: ""
	I0819 20:15:32.228525  487755 logs.go:276] 0 containers: []
	W0819 20:15:32.228534  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:15:32.228545  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:15:32.228564  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:15:32.278289  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:15:32.278331  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:15:32.293008  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:15:32.293035  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:15:32.370272  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:15:32.370297  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:15:32.370312  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:15:32.451775  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:15:32.451825  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:15:32.291517  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:34.291851  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:31.232934  487175 pod_ready.go:103] pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:31.723093  487175 pod_ready.go:82] duration metric: took 4m0.000786925s for pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace to be "Ready" ...
	E0819 20:15:31.723126  487175 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9shzw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 20:15:31.723160  487175 pod_ready.go:39] duration metric: took 4m5.032598764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:15:31.723199  487175 kubeadm.go:597] duration metric: took 4m14.079129844s to restartPrimaryControlPlane
	W0819 20:15:31.723263  487175 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 20:15:31.723306  487175 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
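(At this point the v1.31.0 bootstrapper, pid 487175, has stopped waiting for the existing control plane and falls back to a full reset followed by re-init. A minimal shell sketch of that fallback sequence, assembled only from commands visible in this log; the v1.31.0 init flags are assumed to mirror the v1.20.0 invocation shown further below and the preflight-errors list is abbreviated:)

	# wipe the stale control-plane state (command taken from the log line above)
	sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
	  kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	# then re-initialise from minikube's generated config (same pattern the v1.20.0 runner uses later)
	sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem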
	I0819 20:15:34.989883  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:15:35.002492  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:15:35.002580  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:15:35.037080  487755 cri.go:89] found id: ""
	I0819 20:15:35.037116  487755 logs.go:276] 0 containers: []
	W0819 20:15:35.037145  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:15:35.037156  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:15:35.037225  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:15:35.070003  487755 cri.go:89] found id: ""
	I0819 20:15:35.070037  487755 logs.go:276] 0 containers: []
	W0819 20:15:35.070049  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:15:35.070058  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:15:35.070133  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:15:35.103681  487755 cri.go:89] found id: ""
	I0819 20:15:35.103726  487755 logs.go:276] 0 containers: []
	W0819 20:15:35.103741  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:15:35.103750  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:15:35.103855  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:15:35.138558  487755 cri.go:89] found id: ""
	I0819 20:15:35.138595  487755 logs.go:276] 0 containers: []
	W0819 20:15:35.138608  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:15:35.138615  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:15:35.138692  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:15:35.174451  487755 cri.go:89] found id: ""
	I0819 20:15:35.174483  487755 logs.go:276] 0 containers: []
	W0819 20:15:35.174492  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:15:35.174498  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:15:35.174556  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:15:35.208703  487755 cri.go:89] found id: ""
	I0819 20:15:35.208731  487755 logs.go:276] 0 containers: []
	W0819 20:15:35.208740  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:15:35.208746  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:15:35.208805  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:15:35.244323  487755 cri.go:89] found id: ""
	I0819 20:15:35.244352  487755 logs.go:276] 0 containers: []
	W0819 20:15:35.244361  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:15:35.244366  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:15:35.244430  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:15:35.279338  487755 cri.go:89] found id: ""
	I0819 20:15:35.279373  487755 logs.go:276] 0 containers: []
	W0819 20:15:35.279385  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:15:35.279398  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:15:35.279415  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:15:35.332518  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:15:35.332560  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:15:35.347409  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:15:35.347441  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:15:35.422987  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:15:35.423011  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:15:35.423029  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:15:35.505543  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:15:35.505591  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:15:36.291931  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:38.292382  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:38.051606  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:15:38.064696  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:15:38.064773  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:15:38.099197  487755 cri.go:89] found id: ""
	I0819 20:15:38.099232  487755 logs.go:276] 0 containers: []
	W0819 20:15:38.099245  487755 logs.go:278] No container was found matching "kube-apiserver"
	I0819 20:15:38.099253  487755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:15:38.099325  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:15:38.133700  487755 cri.go:89] found id: ""
	I0819 20:15:38.133730  487755 logs.go:276] 0 containers: []
	W0819 20:15:38.133742  487755 logs.go:278] No container was found matching "etcd"
	I0819 20:15:38.133754  487755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:15:38.133808  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:15:38.168305  487755 cri.go:89] found id: ""
	I0819 20:15:38.168335  487755 logs.go:276] 0 containers: []
	W0819 20:15:38.168345  487755 logs.go:278] No container was found matching "coredns"
	I0819 20:15:38.168355  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:15:38.168423  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:15:38.202582  487755 cri.go:89] found id: ""
	I0819 20:15:38.202614  487755 logs.go:276] 0 containers: []
	W0819 20:15:38.202623  487755 logs.go:278] No container was found matching "kube-scheduler"
	I0819 20:15:38.202630  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:15:38.202701  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:15:38.239824  487755 cri.go:89] found id: ""
	I0819 20:15:38.239858  487755 logs.go:276] 0 containers: []
	W0819 20:15:38.239870  487755 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:15:38.239879  487755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:15:38.239954  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:15:38.275647  487755 cri.go:89] found id: ""
	I0819 20:15:38.275674  487755 logs.go:276] 0 containers: []
	W0819 20:15:38.275683  487755 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 20:15:38.275691  487755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:15:38.275746  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:15:38.309594  487755 cri.go:89] found id: ""
	I0819 20:15:38.309621  487755 logs.go:276] 0 containers: []
	W0819 20:15:38.309630  487755 logs.go:278] No container was found matching "kindnet"
	I0819 20:15:38.309636  487755 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 20:15:38.309691  487755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 20:15:38.348080  487755 cri.go:89] found id: ""
	I0819 20:15:38.348120  487755 logs.go:276] 0 containers: []
	W0819 20:15:38.348129  487755 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 20:15:38.348139  487755 logs.go:123] Gathering logs for kubelet ...
	I0819 20:15:38.348151  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:15:38.396700  487755 logs.go:123] Gathering logs for dmesg ...
	I0819 20:15:38.396742  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:15:38.410053  487755 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:15:38.410083  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:15:38.473272  487755 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:15:38.473296  487755 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:15:38.473310  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:15:38.553948  487755 logs.go:123] Gathering logs for container status ...
	I0819 20:15:38.553991  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:15:41.092184  487755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:15:41.105044  487755 kubeadm.go:597] duration metric: took 4m1.931910887s to restartPrimaryControlPlane
	W0819 20:15:41.105119  487755 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 20:15:41.105159  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 20:15:42.463500  487755 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.358311562s)
	I0819 20:15:42.463615  487755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:15:42.478312  487755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 20:15:42.488270  487755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 20:15:42.498302  487755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 20:15:42.498328  487755 kubeadm.go:157] found existing configuration files:
	
	I0819 20:15:42.498378  487755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 20:15:42.507790  487755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 20:15:42.507867  487755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 20:15:42.517876  487755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 20:15:42.527763  487755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 20:15:42.527841  487755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 20:15:42.537469  487755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 20:15:42.546676  487755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 20:15:42.546765  487755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 20:15:42.556786  487755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 20:15:42.565894  487755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 20:15:42.565962  487755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
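The block above is minikube's stale-config cleanup: it lists the four kubeconfig files under /etc/kubernetes, and when that check fails it greps each file for the control-plane endpoint and removes any file that does not reference it. Below is a minimal local sketch of that check-then-remove pattern, assuming direct file access rather than minikube's SSH runner; the endpoint and paths are taken from the log, the helper name is not minikube's.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleKubeconfigs mirrors the pattern in the log above: for each
// kubeconfig file, remove it unless it already points at the expected
// control-plane endpoint. Illustrative sketch only; minikube runs the
// equivalent shell commands (grep / rm -f) over SSH instead.
func cleanupStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Matches the "No such file or directory" branch in the log:
			// nothing to grep, so just attempt removal and move on.
			_ = os.Remove(f)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			// File exists but references a different endpoint: treat as stale.
			_ = os.Remove(f)
		}
	}
}

func main() {
	cleanupStaleKubeconfigs(
		"https://control-plane.minikube.internal:8443",
		[]string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		},
	)
	fmt.Println("stale kubeconfig cleanup done")
}
```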
	I0819 20:15:42.576576  487755 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 20:15:42.644056  487755 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 20:15:42.644173  487755 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 20:15:42.787126  487755 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 20:15:42.787296  487755 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 20:15:42.787447  487755 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 20:15:42.969714  487755 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 20:15:42.971192  487755 out.go:235]   - Generating certificates and keys ...
	I0819 20:15:42.972813  487755 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 20:15:42.972910  487755 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 20:15:42.973004  487755 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 20:15:42.973087  487755 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 20:15:42.973229  487755 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 20:15:42.973313  487755 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 20:15:42.973405  487755 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 20:15:42.973492  487755 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 20:15:42.973622  487755 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 20:15:42.974099  487755 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 20:15:42.974204  487755 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 20:15:42.974273  487755 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 20:15:43.100006  487755 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 20:15:43.235588  487755 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 20:15:43.661678  487755 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 20:15:43.709543  487755 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 20:15:43.724401  487755 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 20:15:43.725617  487755 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 20:15:43.725676  487755 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 20:15:43.868233  487755 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 20:15:40.790754  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:42.791451  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:44.791885  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:43.869893  487755 out.go:235]   - Booting up control plane ...
	I0819 20:15:43.870036  487755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 20:15:43.877806  487755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 20:15:43.881428  487755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 20:15:43.882197  487755 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 20:15:43.888361  487755 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 20:15:47.292325  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:49.791701  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:51.792120  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:54.290808  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:56.291341  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:58.292278  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:15:58.086606  487175 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.363267911s)
	I0819 20:15:58.086689  487175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:15:58.102421  487175 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 20:15:58.112946  487175 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 20:15:58.123171  487175 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 20:15:58.123195  487175 kubeadm.go:157] found existing configuration files:
	
	I0819 20:15:58.123244  487175 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 20:15:58.133204  487175 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 20:15:58.133276  487175 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 20:15:58.143853  487175 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 20:15:58.153986  487175 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 20:15:58.154055  487175 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 20:15:58.167137  487175 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 20:15:58.177160  487175 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 20:15:58.177244  487175 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 20:15:58.187502  487175 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 20:15:58.197845  487175 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 20:15:58.197933  487175 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 20:15:58.208325  487175 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 20:15:58.248682  487175 kubeadm.go:310] W0819 20:15:58.226302    2554 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 20:15:58.250770  487175 kubeadm.go:310] W0819 20:15:58.228411    2554 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 20:15:58.344210  487175 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
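The three warnings above come from kubeadm 1.31 itself: the generated kubeadm.yaml still uses the deprecated kubeadm.k8s.io/v1beta3 API for ClusterConfiguration and InitConfiguration, and the kubelet systemd unit is not enabled. A hedged sketch of acting on both warnings on the node follows; the migrated output path is an assumption, only /var/tmp/minikube/kubeadm.yaml appears in the log.

```go
package main

import (
	"log"
	"os/exec"
)

// run executes a command and logs its combined output; a thin helper for
// this sketch only.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	log.Printf("%s %v:\n%s (err=%v)", name, args, out, err)
}

func main() {
	// Migrate the deprecated v1beta3 config to the current API version, as the
	// kubeadm warning suggests. The --new-config path is hypothetical.
	run("kubeadm", "config", "migrate",
		"--old-config", "/var/tmp/minikube/kubeadm.yaml",
		"--new-config", "/var/tmp/minikube/kubeadm-migrated.yaml")

	// Address the "[WARNING Service-Kubelet]" line by enabling the kubelet unit.
	run("systemctl", "enable", "kubelet.service")
}
```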
	I0819 20:16:00.292316  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:16:02.791769  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:16:04.791957  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:16:06.541009  487175 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 20:16:06.541088  487175 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 20:16:06.541205  487175 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 20:16:06.541352  487175 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 20:16:06.541472  487175 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 20:16:06.541541  487175 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 20:16:06.543277  487175 out.go:235]   - Generating certificates and keys ...
	I0819 20:16:06.543380  487175 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 20:16:06.543466  487175 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 20:16:06.543587  487175 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 20:16:06.543691  487175 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 20:16:06.543765  487175 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 20:16:06.543849  487175 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 20:16:06.543944  487175 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 20:16:06.544025  487175 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 20:16:06.544157  487175 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 20:16:06.544238  487175 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 20:16:06.544271  487175 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 20:16:06.544338  487175 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 20:16:06.544414  487175 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 20:16:06.544516  487175 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 20:16:06.544581  487175 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 20:16:06.544634  487175 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 20:16:06.544687  487175 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 20:16:06.544767  487175 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 20:16:06.544827  487175 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 20:16:06.546338  487175 out.go:235]   - Booting up control plane ...
	I0819 20:16:06.546443  487175 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 20:16:06.546511  487175 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 20:16:06.546567  487175 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 20:16:06.546660  487175 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 20:16:06.546736  487175 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 20:16:06.546769  487175 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 20:16:06.546881  487175 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 20:16:06.547007  487175 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 20:16:06.547080  487175 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001002232s
	I0819 20:16:06.547179  487175 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 20:16:06.547263  487175 kubeadm.go:310] [api-check] The API server is healthy after 5.002999108s
	I0819 20:16:06.547401  487175 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 20:16:06.547594  487175 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 20:16:06.547685  487175 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 20:16:06.547945  487175 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-108534 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 20:16:06.548042  487175 kubeadm.go:310] [bootstrap-token] Using token: 76o64p.zbpj7ndkf01hokww
	I0819 20:16:06.549531  487175 out.go:235]   - Configuring RBAC rules ...
	I0819 20:16:06.549644  487175 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 20:16:06.549749  487175 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 20:16:06.549941  487175 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 20:16:06.550141  487175 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 20:16:06.550303  487175 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 20:16:06.550434  487175 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 20:16:06.550575  487175 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 20:16:06.550619  487175 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 20:16:06.550661  487175 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 20:16:06.550667  487175 kubeadm.go:310] 
	I0819 20:16:06.550715  487175 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 20:16:06.550721  487175 kubeadm.go:310] 
	I0819 20:16:06.550797  487175 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 20:16:06.550805  487175 kubeadm.go:310] 
	I0819 20:16:06.550826  487175 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 20:16:06.550902  487175 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 20:16:06.550966  487175 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 20:16:06.550975  487175 kubeadm.go:310] 
	I0819 20:16:06.551050  487175 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 20:16:06.551060  487175 kubeadm.go:310] 
	I0819 20:16:06.551103  487175 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 20:16:06.551109  487175 kubeadm.go:310] 
	I0819 20:16:06.551157  487175 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 20:16:06.551220  487175 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 20:16:06.551280  487175 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 20:16:06.551286  487175 kubeadm.go:310] 
	I0819 20:16:06.551389  487175 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 20:16:06.551462  487175 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 20:16:06.551468  487175 kubeadm.go:310] 
	I0819 20:16:06.551542  487175 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 76o64p.zbpj7ndkf01hokww \
	I0819 20:16:06.551627  487175 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f \
	I0819 20:16:06.551648  487175 kubeadm.go:310] 	--control-plane 
	I0819 20:16:06.551654  487175 kubeadm.go:310] 
	I0819 20:16:06.551727  487175 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 20:16:06.551734  487175 kubeadm.go:310] 
	I0819 20:16:06.551819  487175 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 76o64p.zbpj7ndkf01hokww \
	I0819 20:16:06.551933  487175 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f 
	I0819 20:16:06.551946  487175 cni.go:84] Creating CNI manager for ""
	I0819 20:16:06.551954  487175 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 20:16:06.553774  487175 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 20:16:07.291883  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:16:09.292409  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:16:06.555161  487175 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 20:16:06.565990  487175 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
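At this point minikube has picked the bridge CNI for the kvm2 + crio combination and copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The log does not show the file's contents; the sketch below writes an illustrative bridge/host-local conflist of the same general shape, purely as an assumption about what such a file looks like, not the real 1-k8s.conflist.

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// An illustrative CNI conflist using the standard bridge and host-local
// plugins. The exact content minikube ships is not visible in the log above,
// so treat every field here as hypothetical.
const bridgeConflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote bridge CNI conflist")
}
```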
	I0819 20:16:06.590980  487175 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 20:16:06.591064  487175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-108534 minikube.k8s.io/updated_at=2024_08_19T20_16_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=embed-certs-108534 minikube.k8s.io/primary=true
	I0819 20:16:06.591070  487175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:16:06.613664  487175 ops.go:34] apiserver oom_adj: -16
	I0819 20:16:06.780362  487175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:16:07.281295  487175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:16:07.780627  487175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:16:08.280550  487175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:16:08.780575  487175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:16:09.281108  487175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:16:09.780620  487175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:16:10.280700  487175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:16:10.780528  487175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:16:10.900887  487175 kubeadm.go:1113] duration metric: took 4.309897097s to wait for elevateKubeSystemPrivileges
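The repeated `kubectl get sa default` calls above are minikube polling until the default service account exists, after which it binds kube-system:default to cluster-admin via the minikube-rbac clusterrolebinding. A minimal poll-then-bind sketch using kubectl through os/exec, assuming kubectl and a working kubeconfig on the PATH rather than minikube's in-VM binary and --kubeconfig flag:

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Poll roughly every 500ms, as the timestamps in the log suggest, until
	// the default service account shows up in the default namespace.
	for {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			break
		}
		time.Sleep(500 * time.Millisecond)
	}

	// Then grant kube-system:default cluster-admin, mirroring the
	// "create clusterrolebinding minikube-rbac" call in the log.
	out, err := exec.Command("kubectl", "create", "clusterrolebinding", "minikube-rbac",
		"--clusterrole=cluster-admin",
		"--serviceaccount=kube-system:default").CombinedOutput()
	if err != nil {
		log.Fatalf("create clusterrolebinding: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}
```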
	I0819 20:16:10.900927  487175 kubeadm.go:394] duration metric: took 4m53.306894656s to StartCluster
	I0819 20:16:10.900962  487175 settings.go:142] acquiring lock: {Name:mkc71def6be9966e5f008a22161bf3ed26f482b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:16:10.901054  487175 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 20:16:10.903631  487175 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:16:10.903974  487175 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 20:16:10.904116  487175 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 20:16:10.904211  487175 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-108534"
	I0819 20:16:10.904249  487175 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-108534"
	I0819 20:16:10.904244  487175 addons.go:69] Setting default-storageclass=true in profile "embed-certs-108534"
	I0819 20:16:10.904271  487175 addons.go:69] Setting metrics-server=true in profile "embed-certs-108534"
	I0819 20:16:10.904300  487175 addons.go:234] Setting addon metrics-server=true in "embed-certs-108534"
	W0819 20:16:10.904318  487175 addons.go:243] addon metrics-server should already be in state true
	I0819 20:16:10.904365  487175 host.go:66] Checking if "embed-certs-108534" exists ...
	I0819 20:16:10.904301  487175 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-108534"
	W0819 20:16:10.904265  487175 addons.go:243] addon storage-provisioner should already be in state true
	I0819 20:16:10.904576  487175 host.go:66] Checking if "embed-certs-108534" exists ...
	I0819 20:16:10.904254  487175 config.go:182] Loaded profile config "embed-certs-108534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:16:10.904854  487175 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:16:10.904855  487175 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:16:10.904892  487175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:16:10.904910  487175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:16:10.904951  487175 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:16:10.904976  487175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:16:10.905586  487175 out.go:177] * Verifying Kubernetes components...
	I0819 20:16:10.907022  487175 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:16:10.926365  487175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
	I0819 20:16:10.927062  487175 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:16:10.927356  487175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41539
	I0819 20:16:10.927733  487175 main.go:141] libmachine: Using API Version  1
	I0819 20:16:10.927756  487175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:16:10.927831  487175 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:16:10.928165  487175 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:16:10.928375  487175 main.go:141] libmachine: Using API Version  1
	I0819 20:16:10.928398  487175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:16:10.928718  487175 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:16:10.928725  487175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
	I0819 20:16:10.928773  487175 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:16:10.928819  487175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:16:10.929043  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetState
	I0819 20:16:10.929311  487175 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:16:10.929880  487175 main.go:141] libmachine: Using API Version  1
	I0819 20:16:10.929900  487175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:16:10.930244  487175 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:16:10.930863  487175 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:16:10.930898  487175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:16:10.932545  487175 addons.go:234] Setting addon default-storageclass=true in "embed-certs-108534"
	W0819 20:16:10.932570  487175 addons.go:243] addon default-storageclass should already be in state true
	I0819 20:16:10.932593  487175 host.go:66] Checking if "embed-certs-108534" exists ...
	I0819 20:16:10.932847  487175 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:16:10.932874  487175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:16:10.951742  487175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0819 20:16:10.952042  487175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33937
	I0819 20:16:10.952245  487175 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:16:10.952585  487175 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:16:10.952799  487175 main.go:141] libmachine: Using API Version  1
	I0819 20:16:10.952824  487175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:16:10.953226  487175 main.go:141] libmachine: Using API Version  1
	I0819 20:16:10.953247  487175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:16:10.953248  487175 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:16:10.953437  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetState
	I0819 20:16:10.953762  487175 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:16:10.954149  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetState
	I0819 20:16:10.955169  487175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41801
	I0819 20:16:10.955695  487175 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:16:10.955786  487175 main.go:141] libmachine: (embed-certs-108534) Calling .DriverName
	I0819 20:16:10.956288  487175 main.go:141] libmachine: (embed-certs-108534) Calling .DriverName
	I0819 20:16:10.956328  487175 main.go:141] libmachine: Using API Version  1
	I0819 20:16:10.956344  487175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:16:10.957333  487175 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:16:10.957956  487175 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 20:16:10.958083  487175 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:16:10.958134  487175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:16:10.958440  487175 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:16:10.959332  487175 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 20:16:10.959355  487175 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 20:16:10.959384  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHHostname
	I0819 20:16:10.960754  487175 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 20:16:10.960774  487175 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 20:16:10.960796  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHHostname
	I0819 20:16:10.963376  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:16:10.963726  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:16:10.963762  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:16:10.964009  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHPort
	I0819 20:16:10.964192  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:16:10.964327  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHUsername
	I0819 20:16:10.964466  487175 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/embed-certs-108534/id_rsa Username:docker}
	I0819 20:16:10.964785  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:16:10.965283  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:16:10.965310  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:16:10.965578  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHPort
	I0819 20:16:10.965737  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:16:10.965819  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHUsername
	I0819 20:16:10.965967  487175 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/embed-certs-108534/id_rsa Username:docker}
	I0819 20:16:10.977486  487175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44225
	I0819 20:16:10.977984  487175 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:16:10.978416  487175 main.go:141] libmachine: Using API Version  1
	I0819 20:16:10.978433  487175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:16:10.978810  487175 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:16:10.978956  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetState
	I0819 20:16:10.980798  487175 main.go:141] libmachine: (embed-certs-108534) Calling .DriverName
	I0819 20:16:10.981046  487175 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 20:16:10.981068  487175 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 20:16:10.981088  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHHostname
	I0819 20:16:10.984160  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:16:10.984643  487175 main.go:141] libmachine: (embed-certs-108534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:6a:92", ip: ""} in network mk-embed-certs-108534: {Iface:virbr4 ExpiryTime:2024-08-19 21:11:03 +0000 UTC Type:0 Mac:52:54:00:60:6a:92 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:embed-certs-108534 Clientid:01:52:54:00:60:6a:92}
	I0819 20:16:10.984671  487175 main.go:141] libmachine: (embed-certs-108534) DBG | domain embed-certs-108534 has defined IP address 192.168.72.88 and MAC address 52:54:00:60:6a:92 in network mk-embed-certs-108534
	I0819 20:16:10.985035  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHPort
	I0819 20:16:10.985246  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHKeyPath
	I0819 20:16:10.985368  487175 main.go:141] libmachine: (embed-certs-108534) Calling .GetSSHUsername
	I0819 20:16:10.985512  487175 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/embed-certs-108534/id_rsa Username:docker}
	I0819 20:16:11.139096  487175 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:16:11.158667  487175 node_ready.go:35] waiting up to 6m0s for node "embed-certs-108534" to be "Ready" ...
	I0819 20:16:11.168666  487175 node_ready.go:49] node "embed-certs-108534" has status "Ready":"True"
	I0819 20:16:11.168692  487175 node_ready.go:38] duration metric: took 9.989036ms for node "embed-certs-108534" to be "Ready" ...
	I0819 20:16:11.168704  487175 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:16:11.175512  487175 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-65mpt" in "kube-system" namespace to be "Ready" ...
	I0819 20:16:11.218848  487175 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 20:16:11.253247  487175 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 20:16:11.299917  487175 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 20:16:11.299949  487175 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 20:16:11.387490  487175 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 20:16:11.387521  487175 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 20:16:11.454690  487175 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 20:16:11.454724  487175 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 20:16:11.519097  487175 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 20:16:11.865604  487175 main.go:141] libmachine: Making call to close driver server
	I0819 20:16:11.865630  487175 main.go:141] libmachine: (embed-certs-108534) Calling .Close
	I0819 20:16:11.865698  487175 main.go:141] libmachine: Making call to close driver server
	I0819 20:16:11.865728  487175 main.go:141] libmachine: (embed-certs-108534) Calling .Close
	I0819 20:16:11.866070  487175 main.go:141] libmachine: Successfully made call to close driver server
	I0819 20:16:11.866091  487175 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 20:16:11.866102  487175 main.go:141] libmachine: Making call to close driver server
	I0819 20:16:11.866111  487175 main.go:141] libmachine: (embed-certs-108534) Calling .Close
	I0819 20:16:11.866126  487175 main.go:141] libmachine: (embed-certs-108534) DBG | Closing plugin on server side
	I0819 20:16:11.866164  487175 main.go:141] libmachine: Successfully made call to close driver server
	I0819 20:16:11.866171  487175 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 20:16:11.866180  487175 main.go:141] libmachine: Making call to close driver server
	I0819 20:16:11.866193  487175 main.go:141] libmachine: (embed-certs-108534) Calling .Close
	I0819 20:16:11.866360  487175 main.go:141] libmachine: Successfully made call to close driver server
	I0819 20:16:11.866376  487175 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 20:16:11.867681  487175 main.go:141] libmachine: Successfully made call to close driver server
	I0819 20:16:11.867709  487175 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 20:16:11.867709  487175 main.go:141] libmachine: (embed-certs-108534) DBG | Closing plugin on server side
	I0819 20:16:11.892425  487175 main.go:141] libmachine: Making call to close driver server
	I0819 20:16:11.892456  487175 main.go:141] libmachine: (embed-certs-108534) Calling .Close
	I0819 20:16:11.892784  487175 main.go:141] libmachine: Successfully made call to close driver server
	I0819 20:16:11.892805  487175 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 20:16:11.892816  487175 main.go:141] libmachine: (embed-certs-108534) DBG | Closing plugin on server side
	I0819 20:16:12.466878  487175 main.go:141] libmachine: Making call to close driver server
	I0819 20:16:12.466902  487175 main.go:141] libmachine: (embed-certs-108534) Calling .Close
	I0819 20:16:12.467226  487175 main.go:141] libmachine: (embed-certs-108534) DBG | Closing plugin on server side
	I0819 20:16:12.467270  487175 main.go:141] libmachine: Successfully made call to close driver server
	I0819 20:16:12.467294  487175 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 20:16:12.467311  487175 main.go:141] libmachine: Making call to close driver server
	I0819 20:16:12.467323  487175 main.go:141] libmachine: (embed-certs-108534) Calling .Close
	I0819 20:16:12.467603  487175 main.go:141] libmachine: Successfully made call to close driver server
	I0819 20:16:12.467622  487175 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 20:16:12.467636  487175 addons.go:475] Verifying addon metrics-server=true in "embed-certs-108534"
	I0819 20:16:12.469478  487175 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 20:16:11.791230  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:16:13.791559  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:16:12.470869  487175 addons.go:510] duration metric: took 1.566768571s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 20:16:13.183595  487175 pod_ready.go:103] pod "coredns-6f6b679f8f-65mpt" in "kube-system" namespace has status "Ready":"False"
	I0819 20:16:15.182633  487175 pod_ready.go:93] pod "coredns-6f6b679f8f-65mpt" in "kube-system" namespace has status "Ready":"True"
	I0819 20:16:15.182663  487175 pod_ready.go:82] duration metric: took 4.007115203s for pod "coredns-6f6b679f8f-65mpt" in "kube-system" namespace to be "Ready" ...
	I0819 20:16:15.182676  487175 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xp6nv" in "kube-system" namespace to be "Ready" ...
	I0819 20:16:15.189040  487175 pod_ready.go:93] pod "coredns-6f6b679f8f-xp6nv" in "kube-system" namespace has status "Ready":"True"
	I0819 20:16:15.189068  487175 pod_ready.go:82] duration metric: took 6.383222ms for pod "coredns-6f6b679f8f-xp6nv" in "kube-system" namespace to be "Ready" ...
	I0819 20:16:15.189083  487175 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	I0819 20:16:15.195005  487175 pod_ready.go:93] pod "etcd-embed-certs-108534" in "kube-system" namespace has status "Ready":"True"
	I0819 20:16:15.195033  487175 pod_ready.go:82] duration metric: took 5.943132ms for pod "etcd-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	I0819 20:16:15.195044  487175 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	I0819 20:16:17.202582  487175 pod_ready.go:93] pod "kube-apiserver-embed-certs-108534" in "kube-system" namespace has status "Ready":"True"
	I0819 20:16:17.202609  487175 pod_ready.go:82] duration metric: took 2.007558438s for pod "kube-apiserver-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	I0819 20:16:17.202620  487175 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	I0819 20:16:17.208950  487175 pod_ready.go:93] pod "kube-controller-manager-embed-certs-108534" in "kube-system" namespace has status "Ready":"True"
	I0819 20:16:17.208978  487175 pod_ready.go:82] duration metric: took 6.351408ms for pod "kube-controller-manager-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	I0819 20:16:17.208991  487175 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-464sh" in "kube-system" namespace to be "Ready" ...
	I0819 20:16:17.214868  487175 pod_ready.go:93] pod "kube-proxy-464sh" in "kube-system" namespace has status "Ready":"True"
	I0819 20:16:17.214892  487175 pod_ready.go:82] duration metric: took 5.894999ms for pod "kube-proxy-464sh" in "kube-system" namespace to be "Ready" ...
	I0819 20:16:17.214909  487175 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	I0819 20:16:17.580066  487175 pod_ready.go:93] pod "kube-scheduler-embed-certs-108534" in "kube-system" namespace has status "Ready":"True"
	I0819 20:16:17.580095  487175 pod_ready.go:82] duration metric: took 365.17843ms for pod "kube-scheduler-embed-certs-108534" in "kube-system" namespace to be "Ready" ...
	I0819 20:16:17.580105  487175 pod_ready.go:39] duration metric: took 6.411388864s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:16:17.580126  487175 api_server.go:52] waiting for apiserver process to appear ...
	I0819 20:16:17.580196  487175 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:16:17.594595  487175 api_server.go:72] duration metric: took 6.690578626s to wait for apiserver process to appear ...
	I0819 20:16:17.594634  487175 api_server.go:88] waiting for apiserver healthz status ...
	I0819 20:16:17.594659  487175 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0819 20:16:17.599332  487175 api_server.go:279] https://192.168.72.88:8443/healthz returned 200:
	ok
	I0819 20:16:17.600255  487175 api_server.go:141] control plane version: v1.31.0
	I0819 20:16:17.600276  487175 api_server.go:131] duration metric: took 5.634877ms to wait for apiserver health ...
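The health wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, considered healthy once it returns 200 with body "ok". A self-contained sketch of that probe follows; it skips TLS verification to stay short, which is an assumption for illustration only (minikube trusts the cluster CA instead).

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch self-contained; a real check
		// should verify against the cluster CA certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.88:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect "200 ok"
}
```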
	I0819 20:16:17.600284  487175 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 20:16:17.782288  487175 system_pods.go:59] 9 kube-system pods found
	I0819 20:16:17.782321  487175 system_pods.go:61] "coredns-6f6b679f8f-65mpt" [35b95cf7-7572-45f0-8d75-8aaaccc98750] Running
	I0819 20:16:17.782326  487175 system_pods.go:61] "coredns-6f6b679f8f-xp6nv" [c9e56078-bc6f-4ab2-ad68-a26a38f28983] Running
	I0819 20:16:17.782329  487175 system_pods.go:61] "etcd-embed-certs-108534" [56153102-8d8e-48d8-b541-9818a50a8723] Running
	I0819 20:16:17.782333  487175 system_pods.go:61] "kube-apiserver-embed-certs-108534" [eaa72a27-fc7d-4510-9082-5bb812f8804f] Running
	I0819 20:16:17.782336  487175 system_pods.go:61] "kube-controller-manager-embed-certs-108534" [e2256fab-bb31-4e48-92ae-83058511368e] Running
	I0819 20:16:17.782339  487175 system_pods.go:61] "kube-proxy-464sh" [1da322bc-08cb-48b2-bf1a-f6de1988b416] Running
	I0819 20:16:17.782341  487175 system_pods.go:61] "kube-scheduler-embed-certs-108534" [88917fdb-bc10-4fc8-a09a-7ad5e8ff448a] Running
	I0819 20:16:17.782347  487175 system_pods.go:61] "metrics-server-6867b74b74-bvs2v" [5641eae2-725f-4d54-af60-15506c64d76c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 20:16:17.782350  487175 system_pods.go:61] "storage-provisioner" [f487952d-d9e8-43d4-b7f7-38282a59a76d] Running
	I0819 20:16:17.782359  487175 system_pods.go:74] duration metric: took 182.066877ms to wait for pod list to return data ...
	I0819 20:16:17.782366  487175 default_sa.go:34] waiting for default service account to be created ...
	I0819 20:16:17.980107  487175 default_sa.go:45] found service account: "default"
	I0819 20:16:17.980136  487175 default_sa.go:55] duration metric: took 197.763002ms for default service account to be created ...
	I0819 20:16:17.980146  487175 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 20:16:18.181970  487175 system_pods.go:86] 9 kube-system pods found
	I0819 20:16:18.182004  487175 system_pods.go:89] "coredns-6f6b679f8f-65mpt" [35b95cf7-7572-45f0-8d75-8aaaccc98750] Running
	I0819 20:16:18.182009  487175 system_pods.go:89] "coredns-6f6b679f8f-xp6nv" [c9e56078-bc6f-4ab2-ad68-a26a38f28983] Running
	I0819 20:16:18.182013  487175 system_pods.go:89] "etcd-embed-certs-108534" [56153102-8d8e-48d8-b541-9818a50a8723] Running
	I0819 20:16:18.182017  487175 system_pods.go:89] "kube-apiserver-embed-certs-108534" [eaa72a27-fc7d-4510-9082-5bb812f8804f] Running
	I0819 20:16:18.182020  487175 system_pods.go:89] "kube-controller-manager-embed-certs-108534" [e2256fab-bb31-4e48-92ae-83058511368e] Running
	I0819 20:16:18.182024  487175 system_pods.go:89] "kube-proxy-464sh" [1da322bc-08cb-48b2-bf1a-f6de1988b416] Running
	I0819 20:16:18.182027  487175 system_pods.go:89] "kube-scheduler-embed-certs-108534" [88917fdb-bc10-4fc8-a09a-7ad5e8ff448a] Running
	I0819 20:16:18.182033  487175 system_pods.go:89] "metrics-server-6867b74b74-bvs2v" [5641eae2-725f-4d54-af60-15506c64d76c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 20:16:18.182037  487175 system_pods.go:89] "storage-provisioner" [f487952d-d9e8-43d4-b7f7-38282a59a76d] Running
	I0819 20:16:18.182044  487175 system_pods.go:126] duration metric: took 201.89311ms to wait for k8s-apps to be running ...
	I0819 20:16:18.182054  487175 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 20:16:18.182114  487175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:16:18.197227  487175 system_svc.go:56] duration metric: took 15.159651ms WaitForService to wait for kubelet
	I0819 20:16:18.197281  487175 kubeadm.go:582] duration metric: took 7.293267541s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 20:16:18.197310  487175 node_conditions.go:102] verifying NodePressure condition ...
	I0819 20:16:18.380262  487175 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 20:16:18.380288  487175 node_conditions.go:123] node cpu capacity is 2
	I0819 20:16:18.380314  487175 node_conditions.go:105] duration metric: took 182.998377ms to run NodePressure ...
	I0819 20:16:18.380327  487175 start.go:241] waiting for startup goroutines ...
	I0819 20:16:18.380334  487175 start.go:246] waiting for cluster config update ...
	I0819 20:16:18.380344  487175 start.go:255] writing updated cluster config ...
	I0819 20:16:18.380621  487175 ssh_runner.go:195] Run: rm -f paused
	I0819 20:16:18.442758  487175 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 20:16:18.444890  487175 out.go:177] * Done! kubectl is now configured to use "embed-certs-108534" cluster and "default" namespace by default
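The start.go line just above reports the client/cluster "minor skew": the difference between kubectl's minor version and the apiserver's, which kubectl's skew policy allows to differ by at most one in either direction. A small sketch of that comparison, assuming simple "major.minor.patch" version strings like the ones in the log:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor component from a "major.minor.patch" version
// string; good enough for this sketch, not a full semver parser.
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectlVersion, clusterVersion := "1.31.0", "1.31.0" // values from the log
	skew := minorOf(kubectlVersion) - minorOf(clusterVersion)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
}
```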
	I0819 20:16:15.791679  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:16:18.290866  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:16:20.291095  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:16:22.791856  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:16:24.792152  486861 pod_ready.go:103] pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace has status "Ready":"False"
	I0819 20:16:23.889425  487755 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 20:16:23.890156  487755 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:16:23.890384  487755 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 20:16:25.785293  486861 pod_ready.go:82] duration metric: took 4m0.0003563s for pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace to be "Ready" ...
	E0819 20:16:25.785329  486861 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-pwvmg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 20:16:25.785373  486861 pod_ready.go:39] duration metric: took 4m9.456720951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:16:25.785404  486861 kubeadm.go:597] duration metric: took 4m16.263948636s to restartPrimaryControlPlane
	W0819 20:16:25.785465  486861 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 20:16:25.785495  486861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 20:16:28.890759  487755 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:16:28.891073  487755 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 20:16:38.891317  487755 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:16:38.891524  487755 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
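The stalled v1.20.0 bootstrap above is kubeadm's kubelet health check: it repeatedly curls http://localhost:10248/healthz and keeps getting "connection refused", meaning the kubelet never came up for this profile. Below is a sketch of the same probe with a bounded retry loop, assuming it runs locally on the node; the 40-second deadline matches kubeadm's initial timeout in the log, the 5-second retry interval is an assumption.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "http://localhost:10248/healthz"
	deadline := time.Now().Add(40 * time.Second) // kubeadm's initial timeout in the log
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		// Mirrors the "connection refused" retries seen in the log.
		time.Sleep(5 * time.Second)
	}
	fmt.Println("kubelet did not become healthy before the deadline")
}
```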
	I0819 20:16:52.172792  486861 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.387258103s)
	I0819 20:16:52.172909  486861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:16:52.197679  486861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 20:16:52.214504  486861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 20:16:52.227218  486861 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 20:16:52.227245  486861 kubeadm.go:157] found existing configuration files:
	
	I0819 20:16:52.227295  486861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 20:16:52.242323  486861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 20:16:52.242407  486861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 20:16:52.257623  486861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 20:16:52.270753  486861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 20:16:52.270822  486861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 20:16:52.293116  486861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 20:16:52.302634  486861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 20:16:52.302715  486861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 20:16:52.312733  486861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 20:16:52.322895  486861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 20:16:52.322963  486861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
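The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. A sketch of that check collapsed into one shell loop (the endpoint is taken from the log; minikube itself drives these commands individually over SSH):

    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it points at the expected API endpoint
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done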
	I0819 20:16:52.333472  486861 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 20:16:52.382152  486861 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 20:16:52.382286  486861 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 20:16:52.487221  486861 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 20:16:52.487375  486861 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 20:16:52.487498  486861 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 20:16:52.496279  486861 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 20:16:52.498290  486861 out.go:235]   - Generating certificates and keys ...
	I0819 20:16:52.498399  486861 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 20:16:52.498498  486861 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 20:16:52.498625  486861 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 20:16:52.498711  486861 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 20:16:52.498803  486861 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 20:16:52.498877  486861 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 20:16:52.498969  486861 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 20:16:52.499058  486861 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 20:16:52.499156  486861 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 20:16:52.499271  486861 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 20:16:52.499340  486861 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 20:16:52.499421  486861 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 20:16:52.617424  486861 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 20:16:52.742912  486861 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 20:16:52.980057  486861 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 20:16:53.260589  486861 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 20:16:53.482182  486861 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 20:16:53.482524  486861 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 20:16:53.485831  486861 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 20:16:53.487841  486861 out.go:235]   - Booting up control plane ...
	I0819 20:16:53.487972  486861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 20:16:53.488426  486861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 20:16:53.489352  486861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 20:16:53.507732  486861 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 20:16:53.515626  486861 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 20:16:53.515694  486861 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 20:16:53.656979  486861 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 20:16:53.657184  486861 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 20:16:54.669729  486861 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.012950798s
	I0819 20:16:54.669850  486861 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 20:16:59.173463  486861 kubeadm.go:310] [api-check] The API server is healthy after 4.502274653s
	I0819 20:16:59.187104  486861 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 20:16:59.205420  486861 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 20:16:59.242101  486861 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 20:16:59.242360  486861 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-944514 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 20:16:59.258890  486861 kubeadm.go:310] [bootstrap-token] Using token: 9a465m.fxc0fmaefvzsobjy
	I0819 20:16:59.260381  486861 out.go:235]   - Configuring RBAC rules ...
	I0819 20:16:59.260554  486861 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 20:16:59.266355  486861 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 20:16:59.278040  486861 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 20:16:59.285560  486861 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 20:16:59.289851  486861 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 20:16:59.294268  486861 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 20:16:59.579846  486861 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 20:17:00.007677  486861 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 20:17:00.583521  486861 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 20:17:00.583557  486861 kubeadm.go:310] 
	I0819 20:17:00.583618  486861 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 20:17:00.583625  486861 kubeadm.go:310] 
	I0819 20:17:00.583732  486861 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 20:17:00.583753  486861 kubeadm.go:310] 
	I0819 20:17:00.583790  486861 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 20:17:00.583883  486861 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 20:17:00.583969  486861 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 20:17:00.583978  486861 kubeadm.go:310] 
	I0819 20:17:00.584066  486861 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 20:17:00.584075  486861 kubeadm.go:310] 
	I0819 20:17:00.584158  486861 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 20:17:00.584182  486861 kubeadm.go:310] 
	I0819 20:17:00.584248  486861 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 20:17:00.584343  486861 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 20:17:00.584421  486861 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 20:17:00.584441  486861 kubeadm.go:310] 
	I0819 20:17:00.584570  486861 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 20:17:00.584691  486861 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 20:17:00.584704  486861 kubeadm.go:310] 
	I0819 20:17:00.584823  486861 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9a465m.fxc0fmaefvzsobjy \
	I0819 20:17:00.584960  486861 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f \
	I0819 20:17:00.584991  486861 kubeadm.go:310] 	--control-plane 
	I0819 20:17:00.585000  486861 kubeadm.go:310] 
	I0819 20:17:00.585097  486861 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 20:17:00.585106  486861 kubeadm.go:310] 
	I0819 20:17:00.585245  486861 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9a465m.fxc0fmaefvzsobjy \
	I0819 20:17:00.585373  486861 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f 
	I0819 20:17:00.585983  486861 kubeadm.go:310] W0819 20:16:52.371612    3047 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 20:17:00.586335  486861 kubeadm.go:310] W0819 20:16:52.372457    3047 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 20:17:00.586465  486861 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 20:17:00.586507  486861 cni.go:84] Creating CNI manager for ""
	I0819 20:17:00.586518  486861 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 20:17:00.587949  486861 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 20:16:58.892392  487755 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:16:58.892691  487755 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 20:17:00.589060  486861 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 20:17:00.599909  486861 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
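The scp above writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the log does not show its contents. Purely as an illustration of the general shape of such a bridge-plus-portmap conflist (the subnet and plugin options below are assumptions, not the exact bytes minikube wrote):

    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF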
	I0819 20:17:00.619259  486861 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 20:17:00.619316  486861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:17:00.619383  486861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-944514 minikube.k8s.io/updated_at=2024_08_19T20_17_00_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=no-preload-944514 minikube.k8s.io/primary=true
	I0819 20:17:00.650802  486861 ops.go:34] apiserver oom_adj: -16
	I0819 20:17:00.796194  486861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:17:01.296326  486861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:17:01.797008  486861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:17:02.296277  486861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:17:02.797099  486861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:17:03.296637  486861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:17:03.796463  486861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:17:04.296616  486861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:17:04.398918  486861 kubeadm.go:1113] duration metric: took 3.779651086s to wait for elevateKubeSystemPrivileges
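The run of `kubectl get sa default` calls above, issued at roughly half-second intervals, is minikube waiting for the default service account to exist before granting kube-system elevated privileges (elevateKubeSystemPrivileges). The same wait, sketched as a single loop using the binary and kubeconfig paths shown in the log:

    # poll until the "default" service account appears
    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done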
	I0819 20:17:04.398964  486861 kubeadm.go:394] duration metric: took 4m54.929593133s to StartCluster
	I0819 20:17:04.398990  486861 settings.go:142] acquiring lock: {Name:mkc71def6be9966e5f008a22161bf3ed26f482b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:17:04.399114  486861 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 20:17:04.400872  486861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:17:04.401196  486861 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.196 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 20:17:04.401344  486861 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 20:17:04.401455  486861 config.go:182] Loaded profile config "no-preload-944514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:17:04.401463  486861 addons.go:69] Setting storage-provisioner=true in profile "no-preload-944514"
	I0819 20:17:04.401503  486861 addons.go:234] Setting addon storage-provisioner=true in "no-preload-944514"
	I0819 20:17:04.401506  486861 addons.go:69] Setting default-storageclass=true in profile "no-preload-944514"
	W0819 20:17:04.401514  486861 addons.go:243] addon storage-provisioner should already be in state true
	I0819 20:17:04.401533  486861 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-944514"
	I0819 20:17:04.401564  486861 host.go:66] Checking if "no-preload-944514" exists ...
	I0819 20:17:04.401869  486861 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:17:04.401896  486861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:17:04.401950  486861 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:17:04.401979  486861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:17:04.401973  486861 addons.go:69] Setting metrics-server=true in profile "no-preload-944514"
	I0819 20:17:04.402015  486861 addons.go:234] Setting addon metrics-server=true in "no-preload-944514"
	W0819 20:17:04.402029  486861 addons.go:243] addon metrics-server should already be in state true
	I0819 20:17:04.402258  486861 host.go:66] Checking if "no-preload-944514" exists ...
	I0819 20:17:04.402640  486861 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:17:04.402670  486861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:17:04.402879  486861 out.go:177] * Verifying Kubernetes components...
	I0819 20:17:04.404300  486861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:17:04.422046  486861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42251
	I0819 20:17:04.422648  486861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43503
	I0819 20:17:04.422706  486861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46521
	I0819 20:17:04.423110  486861 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:17:04.423204  486861 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:17:04.423652  486861 main.go:141] libmachine: Using API Version  1
	I0819 20:17:04.423673  486861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:17:04.423765  486861 main.go:141] libmachine: Using API Version  1
	I0819 20:17:04.423785  486861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:17:04.424033  486861 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:17:04.424117  486861 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:17:04.424599  486861 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:17:04.424629  486861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:17:04.424685  486861 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:17:04.424718  486861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:17:04.429650  486861 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:17:04.430409  486861 main.go:141] libmachine: Using API Version  1
	I0819 20:17:04.430440  486861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:17:04.430993  486861 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:17:04.431249  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetState
	I0819 20:17:04.435593  486861 addons.go:234] Setting addon default-storageclass=true in "no-preload-944514"
	W0819 20:17:04.435622  486861 addons.go:243] addon default-storageclass should already be in state true
	I0819 20:17:04.435656  486861 host.go:66] Checking if "no-preload-944514" exists ...
	I0819 20:17:04.436032  486861 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:17:04.436066  486861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:17:04.444459  486861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46195
	I0819 20:17:04.445173  486861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40943
	I0819 20:17:04.445430  486861 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:17:04.445824  486861 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:17:04.446210  486861 main.go:141] libmachine: Using API Version  1
	I0819 20:17:04.446230  486861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:17:04.446314  486861 main.go:141] libmachine: Using API Version  1
	I0819 20:17:04.446329  486861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:17:04.446650  486861 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:17:04.446650  486861 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:17:04.446820  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetState
	I0819 20:17:04.446847  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetState
	I0819 20:17:04.448976  486861 main.go:141] libmachine: (no-preload-944514) Calling .DriverName
	I0819 20:17:04.451356  486861 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:17:04.451642  486861 main.go:141] libmachine: (no-preload-944514) Calling .DriverName
	I0819 20:17:04.453080  486861 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 20:17:04.453109  486861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 20:17:04.453158  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHHostname
	I0819 20:17:04.453697  486861 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 20:17:04.454818  486861 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 20:17:04.454849  486861 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 20:17:04.454877  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHHostname
	I0819 20:17:04.457335  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:17:04.457973  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:17:04.458128  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:17:04.458671  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHPort
	I0819 20:17:04.459282  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:17:04.459784  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHUsername
	I0819 20:17:04.459959  486861 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/no-preload-944514/id_rsa Username:docker}
	I0819 20:17:04.461912  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:17:04.462224  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:17:04.462246  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:17:04.462410  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHPort
	I0819 20:17:04.462552  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:17:04.462699  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHUsername
	I0819 20:17:04.462815  486861 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/no-preload-944514/id_rsa Username:docker}
	I0819 20:17:04.463307  486861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I0819 20:17:04.463618  486861 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:17:04.464120  486861 main.go:141] libmachine: Using API Version  1
	I0819 20:17:04.464131  486861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:17:04.464389  486861 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:17:04.464848  486861 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 20:17:04.464866  486861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 20:17:04.480828  486861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35443
	I0819 20:17:04.481585  486861 main.go:141] libmachine: () Calling .GetVersion
	I0819 20:17:04.482097  486861 main.go:141] libmachine: Using API Version  1
	I0819 20:17:04.482110  486861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 20:17:04.482410  486861 main.go:141] libmachine: () Calling .GetMachineName
	I0819 20:17:04.482549  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetState
	I0819 20:17:04.484260  486861 main.go:141] libmachine: (no-preload-944514) Calling .DriverName
	I0819 20:17:04.484437  486861 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 20:17:04.484448  486861 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 20:17:04.484462  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHHostname
	I0819 20:17:04.487426  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:17:04.487996  486861 main.go:141] libmachine: (no-preload-944514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5d:93", ip: ""} in network mk-no-preload-944514: {Iface:virbr1 ExpiryTime:2024-08-19 21:11:42 +0000 UTC Type:0 Mac:52:54:00:b6:5d:93 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-944514 Clientid:01:52:54:00:b6:5d:93}
	I0819 20:17:04.488013  486861 main.go:141] libmachine: (no-preload-944514) DBG | domain no-preload-944514 has defined IP address 192.168.61.196 and MAC address 52:54:00:b6:5d:93 in network mk-no-preload-944514
	I0819 20:17:04.488154  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHPort
	I0819 20:17:04.488316  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHKeyPath
	I0819 20:17:04.488512  486861 main.go:141] libmachine: (no-preload-944514) Calling .GetSSHUsername
	I0819 20:17:04.488670  486861 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/no-preload-944514/id_rsa Username:docker}
	I0819 20:17:04.637669  486861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:17:04.653164  486861 node_ready.go:35] waiting up to 6m0s for node "no-preload-944514" to be "Ready" ...
	I0819 20:17:04.662774  486861 node_ready.go:49] node "no-preload-944514" has status "Ready":"True"
	I0819 20:17:04.662800  486861 node_ready.go:38] duration metric: took 9.596067ms for node "no-preload-944514" to be "Ready" ...
	I0819 20:17:04.662809  486861 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:17:04.669454  486861 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	I0819 20:17:04.758820  486861 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 20:17:04.758858  486861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 20:17:04.785589  486861 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 20:17:04.785628  486861 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 20:17:04.791137  486861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 20:17:04.807505  486861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 20:17:04.831930  486861 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 20:17:04.831964  486861 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 20:17:04.879234  486861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 20:17:05.548498  486861 main.go:141] libmachine: Making call to close driver server
	I0819 20:17:05.548549  486861 main.go:141] libmachine: (no-preload-944514) Calling .Close
	I0819 20:17:05.548578  486861 main.go:141] libmachine: Making call to close driver server
	I0819 20:17:05.548603  486861 main.go:141] libmachine: (no-preload-944514) Calling .Close
	I0819 20:17:05.549107  486861 main.go:141] libmachine: (no-preload-944514) DBG | Closing plugin on server side
	I0819 20:17:05.549173  486861 main.go:141] libmachine: Successfully made call to close driver server
	I0819 20:17:05.549200  486861 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 20:17:05.549213  486861 main.go:141] libmachine: Making call to close driver server
	I0819 20:17:05.549225  486861 main.go:141] libmachine: (no-preload-944514) Calling .Close
	I0819 20:17:05.549469  486861 main.go:141] libmachine: Successfully made call to close driver server
	I0819 20:17:05.549493  486861 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 20:17:05.549515  486861 main.go:141] libmachine: (no-preload-944514) DBG | Closing plugin on server side
	I0819 20:17:05.548999  486861 main.go:141] libmachine: Successfully made call to close driver server
	I0819 20:17:05.550691  486861 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 20:17:05.550709  486861 main.go:141] libmachine: Making call to close driver server
	I0819 20:17:05.550722  486861 main.go:141] libmachine: (no-preload-944514) Calling .Close
	I0819 20:17:05.551017  486861 main.go:141] libmachine: Successfully made call to close driver server
	I0819 20:17:05.551044  486861 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 20:17:05.590236  486861 main.go:141] libmachine: Making call to close driver server
	I0819 20:17:05.590259  486861 main.go:141] libmachine: (no-preload-944514) Calling .Close
	I0819 20:17:05.590570  486861 main.go:141] libmachine: Successfully made call to close driver server
	I0819 20:17:05.590588  486861 main.go:141] libmachine: (no-preload-944514) DBG | Closing plugin on server side
	I0819 20:17:05.590603  486861 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 20:17:05.816252  486861 main.go:141] libmachine: Making call to close driver server
	I0819 20:17:05.816279  486861 main.go:141] libmachine: (no-preload-944514) Calling .Close
	I0819 20:17:05.816660  486861 main.go:141] libmachine: Successfully made call to close driver server
	I0819 20:17:05.816683  486861 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 20:17:05.816694  486861 main.go:141] libmachine: Making call to close driver server
	I0819 20:17:05.816703  486861 main.go:141] libmachine: (no-preload-944514) Calling .Close
	I0819 20:17:05.817105  486861 main.go:141] libmachine: Successfully made call to close driver server
	I0819 20:17:05.817124  486861 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 20:17:05.817164  486861 addons.go:475] Verifying addon metrics-server=true in "no-preload-944514"
	I0819 20:17:05.818850  486861 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 20:17:05.820543  486861 addons.go:510] duration metric: took 1.419217093s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
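With storage-provisioner, default-storageclass and metrics-server enabled, a quick way to spot-check them from the host is sketched below; the pod and storage class names are the usual minikube ones (the metrics-server pod also appears later in this log):

    kubectl -n kube-system get deploy metrics-server
    kubectl -n kube-system get pod storage-provisioner
    kubectl get storageclass   # the "standard" class should be annotated as default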
	I0819 20:17:06.677745  486861 pod_ready.go:103] pod "etcd-no-preload-944514" in "kube-system" namespace has status "Ready":"False"
	I0819 20:17:09.178981  486861 pod_ready.go:103] pod "etcd-no-preload-944514" in "kube-system" namespace has status "Ready":"False"
	I0819 20:17:11.675936  486861 pod_ready.go:93] pod "etcd-no-preload-944514" in "kube-system" namespace has status "Ready":"True"
	I0819 20:17:11.675965  486861 pod_ready.go:82] duration metric: took 7.006481173s for pod "etcd-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	I0819 20:17:11.675978  486861 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	I0819 20:17:11.681165  486861 pod_ready.go:93] pod "kube-apiserver-no-preload-944514" in "kube-system" namespace has status "Ready":"True"
	I0819 20:17:11.681189  486861 pod_ready.go:82] duration metric: took 5.203452ms for pod "kube-apiserver-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	I0819 20:17:11.681199  486861 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	I0819 20:17:12.687291  486861 pod_ready.go:93] pod "kube-controller-manager-no-preload-944514" in "kube-system" namespace has status "Ready":"True"
	I0819 20:17:12.687316  486861 pod_ready.go:82] duration metric: took 1.006110356s for pod "kube-controller-manager-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	I0819 20:17:12.687326  486861 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	I0819 20:17:12.692094  486861 pod_ready.go:93] pod "kube-scheduler-no-preload-944514" in "kube-system" namespace has status "Ready":"True"
	I0819 20:17:12.692126  486861 pod_ready.go:82] duration metric: took 4.793696ms for pod "kube-scheduler-no-preload-944514" in "kube-system" namespace to be "Ready" ...
	I0819 20:17:12.692137  486861 pod_ready.go:39] duration metric: took 8.029316122s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
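The pod_ready waits above poll each system-critical pod for its Ready condition. Roughly the same check, expressed with kubectl using the component labels listed in the log (kube-dns and kube-proxy use k8s-app labels instead):

    for c in etcd kube-apiserver kube-controller-manager kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l component="$c" --timeout=6m
    done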
	I0819 20:17:12.692154  486861 api_server.go:52] waiting for apiserver process to appear ...
	I0819 20:17:12.692216  486861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:17:12.707121  486861 api_server.go:72] duration metric: took 8.30588208s to wait for apiserver process to appear ...
	I0819 20:17:12.707147  486861 api_server.go:88] waiting for apiserver healthz status ...
	I0819 20:17:12.707164  486861 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0819 20:17:12.712421  486861 api_server.go:279] https://192.168.61.196:8443/healthz returned 200:
	ok
	I0819 20:17:12.713418  486861 api_server.go:141] control plane version: v1.31.0
	I0819 20:17:12.713448  486861 api_server.go:131] duration metric: took 6.293638ms to wait for apiserver health ...
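The healthz wait above hits the API server endpoint directly. Equivalent checks by hand, either raw against the address from the log (anonymous access to /healthz is permitted by default) or through kubectl:

    curl -k https://192.168.61.196:8443/healthz; echo
    kubectl get --raw='/healthz?verbose'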
	I0819 20:17:12.713458  486861 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 20:17:12.719027  486861 system_pods.go:59] 9 kube-system pods found
	I0819 20:17:12.719066  486861 system_pods.go:61] "coredns-6f6b679f8f-7qrjk" [4ed5914e-d08b-43b9-8ac7-45f83b05b5b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 20:17:12.719072  486861 system_pods.go:61] "coredns-6f6b679f8f-fjxf7" [683eda43-663e-4ad5-89f4-eba2a15e486f] Running
	I0819 20:17:12.719077  486861 system_pods.go:61] "etcd-no-preload-944514" [62485607-c31c-4ede-afa5-d0257ee5c9c3] Running
	I0819 20:17:12.719081  486861 system_pods.go:61] "kube-apiserver-no-preload-944514" [f47c0a83-0fc7-44f8-ab32-cf0cb4f30a7b] Running
	I0819 20:17:12.719085  486861 system_pods.go:61] "kube-controller-manager-no-preload-944514" [4ce21623-31a3-4f21-895a-97319a9b4250] Running
	I0819 20:17:12.719089  486861 system_pods.go:61] "kube-proxy-chcnl" [790c4f20-7342-4522-a870-b06f26a9707d] Running
	I0819 20:17:12.719091  486861 system_pods.go:61] "kube-scheduler-no-preload-944514" [14a38f3c-1800-4831-93a2-f8b128c9840a] Running
	I0819 20:17:12.719096  486861 system_pods.go:61] "metrics-server-6867b74b74-dh566" [a9102ce2-7d4d-4e31-b272-3fa5daf1ca10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 20:17:12.719099  486861 system_pods.go:61] "storage-provisioner" [82b29e2e-117c-4768-b8f1-27f4e572ed0a] Running
	I0819 20:17:12.719106  486861 system_pods.go:74] duration metric: took 5.641672ms to wait for pod list to return data ...
	I0819 20:17:12.719114  486861 default_sa.go:34] waiting for default service account to be created ...
	I0819 20:17:12.722315  486861 default_sa.go:45] found service account: "default"
	I0819 20:17:12.722353  486861 default_sa.go:55] duration metric: took 3.231204ms for default service account to be created ...
	I0819 20:17:12.722365  486861 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 20:17:12.727273  486861 system_pods.go:86] 9 kube-system pods found
	I0819 20:17:12.727308  486861 system_pods.go:89] "coredns-6f6b679f8f-7qrjk" [4ed5914e-d08b-43b9-8ac7-45f83b05b5b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 20:17:12.727315  486861 system_pods.go:89] "coredns-6f6b679f8f-fjxf7" [683eda43-663e-4ad5-89f4-eba2a15e486f] Running
	I0819 20:17:12.727321  486861 system_pods.go:89] "etcd-no-preload-944514" [62485607-c31c-4ede-afa5-d0257ee5c9c3] Running
	I0819 20:17:12.727325  486861 system_pods.go:89] "kube-apiserver-no-preload-944514" [f47c0a83-0fc7-44f8-ab32-cf0cb4f30a7b] Running
	I0819 20:17:12.727329  486861 system_pods.go:89] "kube-controller-manager-no-preload-944514" [4ce21623-31a3-4f21-895a-97319a9b4250] Running
	I0819 20:17:12.727333  486861 system_pods.go:89] "kube-proxy-chcnl" [790c4f20-7342-4522-a870-b06f26a9707d] Running
	I0819 20:17:12.727336  486861 system_pods.go:89] "kube-scheduler-no-preload-944514" [14a38f3c-1800-4831-93a2-f8b128c9840a] Running
	I0819 20:17:12.727341  486861 system_pods.go:89] "metrics-server-6867b74b74-dh566" [a9102ce2-7d4d-4e31-b272-3fa5daf1ca10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 20:17:12.727344  486861 system_pods.go:89] "storage-provisioner" [82b29e2e-117c-4768-b8f1-27f4e572ed0a] Running
	I0819 20:17:12.727352  486861 system_pods.go:126] duration metric: took 4.980295ms to wait for k8s-apps to be running ...
	I0819 20:17:12.727358  486861 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 20:17:12.727404  486861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:17:12.741553  486861 system_svc.go:56] duration metric: took 14.183111ms WaitForService to wait for kubelet
	I0819 20:17:12.741584  486861 kubeadm.go:582] duration metric: took 8.340349711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 20:17:12.741606  486861 node_conditions.go:102] verifying NodePressure condition ...
	I0819 20:17:12.875023  486861 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 20:17:12.875055  486861 node_conditions.go:123] node cpu capacity is 2
	I0819 20:17:12.875067  486861 node_conditions.go:105] duration metric: took 133.456812ms to run NodePressure ...
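The NodePressure check reads the node's capacity (2 CPUs, 17734596Ki ephemeral storage) and its pressure conditions. The same data can be pulled with jsonpath queries, for example:

    kubectl get node no-preload-944514 -o jsonpath='{.status.capacity}{"\n"}'
    kubectl get node no-preload-944514 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'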
	I0819 20:17:12.875080  486861 start.go:241] waiting for startup goroutines ...
	I0819 20:17:12.875087  486861 start.go:246] waiting for cluster config update ...
	I0819 20:17:12.875097  486861 start.go:255] writing updated cluster config ...
	I0819 20:17:12.875411  486861 ssh_runner.go:195] Run: rm -f paused
	I0819 20:17:12.928010  486861 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 20:17:12.929593  486861 out.go:177] * Done! kubectl is now configured to use "no-preload-944514" cluster and "default" namespace by default
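At this point kubectl on the host is pointed at the new profile; a quick confirmation (a sketch, not part of the recorded test run):

    kubectl config current-context   # expect: no-preload-944514
    kubectl get nodes -o wide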
	I0819 20:17:38.894748  487755 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 20:17:38.895081  487755 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 20:17:38.895104  487755 kubeadm.go:310] 
	I0819 20:17:38.895155  487755 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 20:17:38.895216  487755 kubeadm.go:310] 		timed out waiting for the condition
	I0819 20:17:38.895244  487755 kubeadm.go:310] 
	I0819 20:17:38.895293  487755 kubeadm.go:310] 	This error is likely caused by:
	I0819 20:17:38.895371  487755 kubeadm.go:310] 		- The kubelet is not running
	I0819 20:17:38.895550  487755 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 20:17:38.895562  487755 kubeadm.go:310] 
	I0819 20:17:38.895717  487755 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 20:17:38.895781  487755 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 20:17:38.895840  487755 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 20:17:38.895850  487755 kubeadm.go:310] 
	I0819 20:17:38.895984  487755 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 20:17:38.896105  487755 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 20:17:38.896120  487755 kubeadm.go:310] 
	I0819 20:17:38.896289  487755 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 20:17:38.896410  487755 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 20:17:38.896550  487755 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 20:17:38.896655  487755 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 20:17:38.896677  487755 kubeadm.go:310] 
	I0819 20:17:38.896832  487755 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 20:17:38.896950  487755 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 20:17:38.897076  487755 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 20:17:38.897203  487755 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
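The failure text above (pid 487755, kubeadm init for v1.20.0) recommends several checks; run on the node, they can be collapsed into one sequence, using the CRI-O socket path given in the log:

    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then, for a failing container ID from the listing above:
    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>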
	
	I0819 20:17:38.897256  487755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 20:17:39.361580  487755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:17:39.378440  487755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 20:17:39.388889  487755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 20:17:39.388909  487755 kubeadm.go:157] found existing configuration files:
	
	I0819 20:17:39.388964  487755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 20:17:39.398710  487755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 20:17:39.398779  487755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 20:17:39.409094  487755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 20:17:39.419194  487755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 20:17:39.419268  487755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 20:17:39.429683  487755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 20:17:39.439777  487755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 20:17:39.439866  487755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 20:17:39.449944  487755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 20:17:39.459737  487755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 20:17:39.459821  487755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 20:17:39.470140  487755 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 20:17:39.681557  487755 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 20:18:15.531194  486208 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.001060888s
	I0819 20:18:15.531257  486208 kubeadm.go:310] 
	I0819 20:18:15.531300  486208 kubeadm.go:310] Unfortunately, an error has occurred:
	I0819 20:18:15.531334  486208 kubeadm.go:310] 	context deadline exceeded
	I0819 20:18:15.531342  486208 kubeadm.go:310] 
	I0819 20:18:15.531369  486208 kubeadm.go:310] This error is likely caused by:
	I0819 20:18:15.531397  486208 kubeadm.go:310] 	- The kubelet is not running
	I0819 20:18:15.531566  486208 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 20:18:15.531605  486208 kubeadm.go:310] 
	I0819 20:18:15.531751  486208 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 20:18:15.531802  486208 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0819 20:18:15.531848  486208 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0819 20:18:15.531859  486208 kubeadm.go:310] 
	I0819 20:18:15.531999  486208 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 20:18:15.532126  486208 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 20:18:15.532241  486208 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0819 20:18:15.532353  486208 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 20:18:15.532447  486208 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0819 20:18:15.532549  486208 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0819 20:18:15.533422  486208 kubeadm.go:310] W0819 20:14:13.497859   10596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 20:18:15.533728  486208 kubeadm.go:310] W0819 20:14:13.498612   10596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 20:18:15.533912  486208 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 20:18:15.534030  486208 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0819 20:18:15.534141  486208 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 20:18:15.534227  486208 kubeadm.go:394] duration metric: took 12m8.663928628s to StartCluster
	I0819 20:18:15.534275  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:18:15.534329  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:18:15.571645  486208 cri.go:89] found id: "ff34b36702bacbf1f7ec07d28b47bdae7e8d926ab8c3d90e5810c62a26458d36"
	I0819 20:18:15.571674  486208 cri.go:89] found id: ""
	I0819 20:18:15.571685  486208 logs.go:276] 1 containers: [ff34b36702bacbf1f7ec07d28b47bdae7e8d926ab8c3d90e5810c62a26458d36]
	I0819 20:18:15.571754  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:18:15.576467  486208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:18:15.576548  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:18:15.619531  486208 cri.go:89] found id: ""
	I0819 20:18:15.619556  486208 logs.go:276] 0 containers: []
	W0819 20:18:15.619565  486208 logs.go:278] No container was found matching "etcd"
	I0819 20:18:15.619572  486208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:18:15.619624  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:18:15.655116  486208 cri.go:89] found id: ""
	I0819 20:18:15.655158  486208 logs.go:276] 0 containers: []
	W0819 20:18:15.655170  486208 logs.go:278] No container was found matching "coredns"
	I0819 20:18:15.655178  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:18:15.655248  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:18:15.709960  486208 cri.go:89] found id: "fe6b0d0fdc1dfb4cce635d9e4719bcdfd0b7d77b5bc444fe8fd8a1dc9403d7cd"
	I0819 20:18:15.709992  486208 cri.go:89] found id: ""
	I0819 20:18:15.710003  486208 logs.go:276] 1 containers: [fe6b0d0fdc1dfb4cce635d9e4719bcdfd0b7d77b5bc444fe8fd8a1dc9403d7cd]
	I0819 20:18:15.710065  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:18:15.714325  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:18:15.714409  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:18:15.751550  486208 cri.go:89] found id: ""
	I0819 20:18:15.751586  486208 logs.go:276] 0 containers: []
	W0819 20:18:15.751595  486208 logs.go:278] No container was found matching "kube-proxy"
	I0819 20:18:15.751602  486208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:18:15.751660  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:18:15.787140  486208 cri.go:89] found id: "4a55d1c0a2f3bf48094dce144098c79f421a6085e738f9eea8021acbcced3aac"
	I0819 20:18:15.787165  486208 cri.go:89] found id: ""
	I0819 20:18:15.787174  486208 logs.go:276] 1 containers: [4a55d1c0a2f3bf48094dce144098c79f421a6085e738f9eea8021acbcced3aac]
	I0819 20:18:15.787231  486208 ssh_runner.go:195] Run: which crictl
	I0819 20:18:15.791370  486208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:18:15.791454  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:18:15.825822  486208 cri.go:89] found id: ""
	I0819 20:18:15.825867  486208 logs.go:276] 0 containers: []
	W0819 20:18:15.825880  486208 logs.go:278] No container was found matching "kindnet"
	I0819 20:18:15.825890  486208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 20:18:15.825969  486208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 20:18:15.861456  486208 cri.go:89] found id: ""
	I0819 20:18:15.861485  486208 logs.go:276] 0 containers: []
	W0819 20:18:15.861494  486208 logs.go:278] No container was found matching "storage-provisioner"
	I0819 20:18:15.861504  486208 logs.go:123] Gathering logs for kube-controller-manager [4a55d1c0a2f3bf48094dce144098c79f421a6085e738f9eea8021acbcced3aac] ...
	I0819 20:18:15.861519  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a55d1c0a2f3bf48094dce144098c79f421a6085e738f9eea8021acbcced3aac"
	I0819 20:18:15.900821  486208 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:18:15.900852  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:18:16.126415  486208 logs.go:123] Gathering logs for container status ...
	I0819 20:18:16.126463  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:18:16.170985  486208 logs.go:123] Gathering logs for kubelet ...
	I0819 20:18:16.171031  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:18:16.311690  486208 logs.go:123] Gathering logs for dmesg ...
	I0819 20:18:16.311740  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:18:16.326693  486208 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:18:16.326731  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 20:18:16.396125  486208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 20:18:16.396160  486208 logs.go:123] Gathering logs for kube-apiserver [ff34b36702bacbf1f7ec07d28b47bdae7e8d926ab8c3d90e5810c62a26458d36] ...
	I0819 20:18:16.396178  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff34b36702bacbf1f7ec07d28b47bdae7e8d926ab8c3d90e5810c62a26458d36"
	I0819 20:18:16.435339  486208 logs.go:123] Gathering logs for kube-scheduler [fe6b0d0fdc1dfb4cce635d9e4719bcdfd0b7d77b5bc444fe8fd8a1dc9403d7cd] ...
	I0819 20:18:16.435376  486208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe6b0d0fdc1dfb4cce635d9e4719bcdfd0b7d77b5bc444fe8fd8a1dc9403d7cd"
	W0819 20:18:16.513916  486208 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.942981ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.001060888s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0819 20:14:13.497859   10596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0819 20:14:13.498612   10596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 20:18:16.513996  486208 out.go:270] * 
	W0819 20:18:16.514056  486208 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.942981ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.001060888s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0819 20:14:13.497859   10596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0819 20:14:13.498612   10596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 20:18:16.514070  486208 out.go:270] * 
	W0819 20:18:16.514825  486208 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 20:18:16.518530  486208 out.go:201] 
	W0819 20:18:16.519851  486208 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.942981ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.001060888s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0819 20:14:13.497859   10596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0819 20:14:13.498612   10596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 20:18:16.519904  486208 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 20:18:16.519925  486208 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 20:18:16.521337  486208 out.go:201] 
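
The kubeadm output above already lists the next diagnostic steps; gathered into one pass on the node they look roughly like this (CONTAINERID is a placeholder for whichever container the listing shows as failing):

    # Run on the node, e.g. after `minikube ssh`.
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # Replace CONTAINERID with an ID from the listing above.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
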
	
	
	==> CRI-O <==
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.406429573Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724098698406402906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c1f210f-f827-4e77-9ce8-9e6ef2d1dafe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.407134127Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=405b2540-cedc-47f1-8f3b-ae5cd58e9e3f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.407183322Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=405b2540-cedc-47f1-8f3b-ae5cd58e9e3f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.407274579Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a55d1c0a2f3bf48094dce144098c79f421a6085e738f9eea8021acbcced3aac,PodSandboxId:944a5086a9750d40d783c36c06e47834e65b2898bdd0b200ed43db93d7c3e167,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724098639356929761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-382787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f84c60b360b7b53505ee26807f776ee,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff34b36702bacbf1f7ec07d28b47bdae7e8d926ab8c3d90e5810c62a26458d36,PodSandboxId:dd03a0cb7e33c0f9176e30a7e1d2705075ecbbd30e0d8bcfe01a5bcf8b05491d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724098624357498004,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-382787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc8a129bb7e9ca924a43b856f53a243,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe6b0d0fdc1dfb4cce635d9e4719bcdfd0b7d77b5bc444fe8fd8a1dc9403d7cd,PodSandboxId:1d0ae19b4389f60d81c1ae388a667b080e6b9e07d27e2adb3ed67530d54e3256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724098455942578932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-382787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a1749d95b28c1010fa131d38532a49c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=405b2540-cedc-47f1-8f3b-ae5cd58e9e3f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.440244755Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f01e24a2-2c8a-42a9-9fb8-dd2d559aa772 name=/runtime.v1.RuntimeService/Version
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.440332249Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f01e24a2-2c8a-42a9-9fb8-dd2d559aa772 name=/runtime.v1.RuntimeService/Version
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.441353801Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=973d46d2-0bd3-4ad0-a185-c4a5dfee376b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.441733113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724098698441710994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=973d46d2-0bd3-4ad0-a185-c4a5dfee376b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.442310131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00b76757-5094-45e0-8e45-84ffc30a26e3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.442380057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00b76757-5094-45e0-8e45-84ffc30a26e3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.442471821Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a55d1c0a2f3bf48094dce144098c79f421a6085e738f9eea8021acbcced3aac,PodSandboxId:944a5086a9750d40d783c36c06e47834e65b2898bdd0b200ed43db93d7c3e167,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724098639356929761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-382787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f84c60b360b7b53505ee26807f776ee,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff34b36702bacbf1f7ec07d28b47bdae7e8d926ab8c3d90e5810c62a26458d36,PodSandboxId:dd03a0cb7e33c0f9176e30a7e1d2705075ecbbd30e0d8bcfe01a5bcf8b05491d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724098624357498004,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-382787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc8a129bb7e9ca924a43b856f53a243,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe6b0d0fdc1dfb4cce635d9e4719bcdfd0b7d77b5bc444fe8fd8a1dc9403d7cd,PodSandboxId:1d0ae19b4389f60d81c1ae388a667b080e6b9e07d27e2adb3ed67530d54e3256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724098455942578932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-382787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a1749d95b28c1010fa131d38532a49c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00b76757-5094-45e0-8e45-84ffc30a26e3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.482591308Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e71929a-5c46-434d-b889-263536c2e434 name=/runtime.v1.RuntimeService/Version
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.482694688Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e71929a-5c46-434d-b889-263536c2e434 name=/runtime.v1.RuntimeService/Version
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.484084304Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c108a62c-fbb1-4d53-9283-a7eb3858c1aa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.484692393Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724098698484662606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c108a62c-fbb1-4d53-9283-a7eb3858c1aa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.485245782Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf802a70-5830-4727-95cb-e3060a43612a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.485327763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf802a70-5830-4727-95cb-e3060a43612a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.485456787Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a55d1c0a2f3bf48094dce144098c79f421a6085e738f9eea8021acbcced3aac,PodSandboxId:944a5086a9750d40d783c36c06e47834e65b2898bdd0b200ed43db93d7c3e167,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724098639356929761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-382787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f84c60b360b7b53505ee26807f776ee,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff34b36702bacbf1f7ec07d28b47bdae7e8d926ab8c3d90e5810c62a26458d36,PodSandboxId:dd03a0cb7e33c0f9176e30a7e1d2705075ecbbd30e0d8bcfe01a5bcf8b05491d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724098624357498004,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-382787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc8a129bb7e9ca924a43b856f53a243,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe6b0d0fdc1dfb4cce635d9e4719bcdfd0b7d77b5bc444fe8fd8a1dc9403d7cd,PodSandboxId:1d0ae19b4389f60d81c1ae388a667b080e6b9e07d27e2adb3ed67530d54e3256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724098455942578932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-382787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a1749d95b28c1010fa131d38532a49c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf802a70-5830-4727-95cb-e3060a43612a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.518139925Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7b2ec2c-1fce-4176-a4a4-b7aab84a6c0e name=/runtime.v1.RuntimeService/Version
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.518227467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7b2ec2c-1fce-4176-a4a4-b7aab84a6c0e name=/runtime.v1.RuntimeService/Version
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.519330152Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d956c4dc-efde-4e0f-ab68-1dfe6f5d4676 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.519719488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724098698519695752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d956c4dc-efde-4e0f-ab68-1dfe6f5d4676 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.520378131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1213053d-5174-4be5-b880-ff078e17d1c2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.520448280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1213053d-5174-4be5-b880-ff078e17d1c2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 20:18:18 kubernetes-upgrade-382787 crio[3223]: time="2024-08-19 20:18:18.520546767Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a55d1c0a2f3bf48094dce144098c79f421a6085e738f9eea8021acbcced3aac,PodSandboxId:944a5086a9750d40d783c36c06e47834e65b2898bdd0b200ed43db93d7c3e167,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724098639356929761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-382787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f84c60b360b7b53505ee26807f776ee,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff34b36702bacbf1f7ec07d28b47bdae7e8d926ab8c3d90e5810c62a26458d36,PodSandboxId:dd03a0cb7e33c0f9176e30a7e1d2705075ecbbd30e0d8bcfe01a5bcf8b05491d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724098624357498004,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-382787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc8a129bb7e9ca924a43b856f53a243,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe6b0d0fdc1dfb4cce635d9e4719bcdfd0b7d77b5bc444fe8fd8a1dc9403d7cd,PodSandboxId:1d0ae19b4389f60d81c1ae388a667b080e6b9e07d27e2adb3ed67530d54e3256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724098455942578932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-382787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a1749d95b28c1010fa131d38532a49c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1213053d-5174-4be5-b880-ff078e17d1c2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4a55d1c0a2f3b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   59 seconds ago       Exited              kube-controller-manager   15                  944a5086a9750       kube-controller-manager-kubernetes-upgrade-382787
	ff34b36702bac       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   About a minute ago   Exited              kube-apiserver            15                  dd03a0cb7e33c       kube-apiserver-kubernetes-upgrade-382787
	fe6b0d0fdc1df       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   4 minutes ago        Running             kube-scheduler            4                   1d0ae19b4389f       kube-scheduler-kubernetes-upgrade-382787
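
The table above summarizes the failure: kube-apiserver and kube-controller-manager are both exited on their 15th attempt, only the scheduler is still running, and no etcd container appears at all. The same log-gathering commands minikube ran can be repeated by hand; a sketch using the container IDs shown above:

    # Tail the logs of the exited control-plane containers (IDs from the table above).
    sudo /usr/bin/crictl logs --tail 400 ff34b36702bacbf1f7ec07d28b47bdae7e8d926ab8c3d90e5810c62a26458d36   # kube-apiserver
    sudo /usr/bin/crictl logs --tail 400 4a55d1c0a2f3bf48094dce144098c79f421a6085e738f9eea8021acbcced3aac   # kube-controller-manager
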
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +4.913725] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.060855] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056448] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.195592] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.133602] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.298489] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[Aug19 20:04] systemd-fstab-generator[732]: Ignoring "noauto" option for root device
	[  +2.094081] systemd-fstab-generator[854]: Ignoring "noauto" option for root device
	[  +0.061275] kauditd_printk_skb: 158 callbacks suppressed
	[ +13.212251] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	[  +0.094141] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.057912] kauditd_printk_skb: 112 callbacks suppressed
	[  +0.818040] systemd-fstab-generator[2706]: Ignoring "noauto" option for root device
	[  +0.254660] systemd-fstab-generator[2824]: Ignoring "noauto" option for root device
	[  +0.255110] systemd-fstab-generator[2871]: Ignoring "noauto" option for root device
	[  +0.194446] systemd-fstab-generator[2903]: Ignoring "noauto" option for root device
	[  +0.484401] systemd-fstab-generator[3044]: Ignoring "noauto" option for root device
	[Aug19 20:06] systemd-fstab-generator[3357]: Ignoring "noauto" option for root device
	[  +0.089474] kauditd_printk_skb: 208 callbacks suppressed
	[  +2.461603] systemd-fstab-generator[4052]: Ignoring "noauto" option for root device
	[ +22.456780] kauditd_printk_skb: 135 callbacks suppressed
	[Aug19 20:10] systemd-fstab-generator[9667]: Ignoring "noauto" option for root device
	[ +22.729773] kauditd_printk_skb: 81 callbacks suppressed
	[Aug19 20:14] systemd-fstab-generator[10623]: Ignoring "noauto" option for root device
	[ +22.599114] kauditd_printk_skb: 54 callbacks suppressed
	
	
	==> kernel <==
	 20:18:18 up 14 min,  0 users,  load average: 0.17, 0.12, 0.09
	Linux kubernetes-upgrade-382787 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ff34b36702bacbf1f7ec07d28b47bdae7e8d926ab8c3d90e5810c62a26458d36] <==
	I0819 20:17:04.537242       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 20:17:05.050012       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0819 20:17:05.050387       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:05.050590       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0819 20:17:05.054628       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 20:17:05.058221       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 20:17:05.058245       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 20:17:05.058445       1 instance.go:232] Using reconciler: lease
	W0819 20:17:05.059485       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:06.051899       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:06.051950       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:06.060917       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:07.546375       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:07.806475       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:07.849065       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:09.706346       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:10.452181       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:10.706545       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:13.507716       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:14.340884       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:14.411014       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:19.590425       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:21.058500       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 20:17:21.468789       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0819 20:17:25.059352       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
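
Every error in the apiserver log above points at the same thing: nothing is answering on 127.0.0.1:2379, so the lease storage setup hits its deadline and the apiserver exits. Since the earlier crictl sweep found no etcd container at all, a reasonable next check (a sketch, assuming node shell access and that ss is available in the guest image) is whether the etcd static-pod manifest exists and whether anything ever bound the etcd client port:

    ls -l /etc/kubernetes/manifests/            # is etcd.yaml there for the kubelet to pick up?
    sudo crictl ps -a --name etcd               # any etcd container, in any state?
    sudo ss -tlnp | grep 2379                   # anything listening on the etcd client port?
    sudo journalctl -u kubelet --no-pager | grep -i etcd | tail -n 50
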
	
	
	==> kube-controller-manager [4a55d1c0a2f3bf48094dce144098c79f421a6085e738f9eea8021acbcced3aac] <==
	I0819 20:17:19.781497       1 serving.go:386] Generated self-signed cert in-memory
	I0819 20:17:20.117689       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 20:17:20.117793       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 20:17:20.119546       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 20:17:20.119727       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 20:17:20.119940       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 20:17:20.120280       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 20:17:36.065929       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.50.10:8443/healthz\": dial tcp 192.168.50.10:8443: connect: connection refused"
	
	
	==> kube-scheduler [fe6b0d0fdc1dfb4cce635d9e4719bcdfd0b7d77b5bc444fe8fd8a1dc9403d7cd] <==
	E0819 20:17:35.061878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.50.10:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	W0819 20:17:35.475392       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.50.10:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.50.10:8443: connect: connection refused
	E0819 20:17:35.475451       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.50.10:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	W0819 20:17:36.963338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.50.10:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.10:8443: connect: connection refused
	E0819 20:17:36.963402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.50.10:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	W0819 20:17:39.571807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.50.10:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.50.10:8443: connect: connection refused
	E0819 20:17:39.571872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.50.10:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	W0819 20:17:52.844704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.50.10:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.50.10:8443: connect: connection refused
	E0819 20:17:52.844751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.50.10:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	W0819 20:18:00.887876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.50.10:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.50.10:8443: connect: connection refused
	E0819 20:18:00.887931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.50.10:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	W0819 20:18:01.043428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.50.10:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.10:8443: connect: connection refused
	E0819 20:18:01.043472       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.50.10:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	W0819 20:18:03.020592       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.50.10:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.50.10:8443: connect: connection refused
	E0819 20:18:03.020645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.50.10:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	W0819 20:18:03.729459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.50.10:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.50.10:8443: connect: connection refused
	E0819 20:18:03.729528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.50.10:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	W0819 20:18:05.658568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.50.10:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.50.10:8443: connect: connection refused
	E0819 20:18:05.658612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.50.10:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	W0819 20:18:08.904818       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.50.10:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.50.10:8443: connect: connection refused
	E0819 20:18:08.904866       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.50.10:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	W0819 20:18:15.013908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.50.10:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.10:8443: connect: connection refused
	E0819 20:18:15.013954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.50.10:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	W0819 20:18:18.115902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.50.10:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.10:8443: connect: connection refused
	E0819 20:18:18.115960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.50.10:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 19 20:18:07 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:07.348564   10630 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-kubernetes-upgrade-382787_kube-system_f002b8e96f5ef1767c08d20fee5207af_1\" is already in use by ec0d39310a198ae57d82ac07eed5b8fcc8c4d3b0339bf37a211e8108390f8ac3. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="38c4e31d26291659420299c91c11c53260ab60eb7f4e0e18d2cb6c752d0a14ea"
	Aug 19 20:18:07 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:07.348734   10630 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.15-0,Command:[etcd --advertise-client-urls=https://192.168.50.10:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.50.10:2380 --initial-cluster=kubernetes-upgrade-382787=https://192.168.50.10:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.50.10:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.50.10:2380 --name=kubernetes-upgrade-382787 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib
/minikube/certs/etcd/ca.crt --proxy-refresh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,}
,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-kubernetes-upgrade-382787_kube-system(f002b8e96f5ef1767c08d20fee5207
af): CreateContainerError: the container name \"k8s_etcd_etcd-kubernetes-upgrade-382787_kube-system_f002b8e96f5ef1767c08d20fee5207af_1\" is already in use by ec0d39310a198ae57d82ac07eed5b8fcc8c4d3b0339bf37a211e8108390f8ac3. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Aug 19 20:18:07 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:07.350192   10630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-kubernetes-upgrade-382787_kube-system_f002b8e96f5ef1767c08d20fee5207af_1\\\" is already in use by ec0d39310a198ae57d82ac07eed5b8fcc8c4d3b0339bf37a211e8108390f8ac3. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-kubernetes-upgrade-382787" podUID="f002b8e96f5ef1767c08d20fee5207af"
	Aug 19 20:18:08 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:08.074906   10630 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-382787?timeout=10s\": dial tcp 192.168.50.10:8443: connect: connection refused" interval="7s"
	Aug 19 20:18:08 kubernetes-upgrade-382787 kubelet[10630]: W0819 20:18:08.157543   10630 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.10:8443: connect: connection refused
	Aug 19 20:18:08 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:08.157653   10630 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	Aug 19 20:18:10 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:10.060707   10630 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.50.10:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-382787.17ed3a6916b625bf  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-382787,UID:kubernetes-upgrade-382787,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node kubernetes-upgrade-382787 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-382787,},FirstTimestamp:2024-08-19 20:14:15.379781055 +0000 UTC m=+0.398232180,LastTimestamp:2024-08-19 20:14:15.379781055 +0000 UTC m=+0.398232180,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,
ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-382787,}"
	Aug 19 20:18:10 kubernetes-upgrade-382787 kubelet[10630]: I0819 20:18:10.161070   10630 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-382787"
	Aug 19 20:18:10 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:10.162232   10630 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.10:8443: connect: connection refused" node="kubernetes-upgrade-382787"
	Aug 19 20:18:10 kubernetes-upgrade-382787 kubelet[10630]: I0819 20:18:10.342175   10630 scope.go:117] "RemoveContainer" containerID="4a55d1c0a2f3bf48094dce144098c79f421a6085e738f9eea8021acbcced3aac"
	Aug 19 20:18:10 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:10.342318   10630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-382787_kube-system(8f84c60b360b7b53505ee26807f776ee)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-382787" podUID="8f84c60b360b7b53505ee26807f776ee"
	Aug 19 20:18:12 kubernetes-upgrade-382787 kubelet[10630]: W0819 20:18:12.238712   10630 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-382787&limit=500&resourceVersion=0": dial tcp 192.168.50.10:8443: connect: connection refused
	Aug 19 20:18:12 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:12.239167   10630 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-382787&limit=500&resourceVersion=0\": dial tcp 192.168.50.10:8443: connect: connection refused" logger="UnhandledError"
	Aug 19 20:18:14 kubernetes-upgrade-382787 kubelet[10630]: I0819 20:18:14.342989   10630 scope.go:117] "RemoveContainer" containerID="ff34b36702bacbf1f7ec07d28b47bdae7e8d926ab8c3d90e5810c62a26458d36"
	Aug 19 20:18:14 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:14.343250   10630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-382787_kube-system(fdc8a129bb7e9ca924a43b856f53a243)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-382787" podUID="fdc8a129bb7e9ca924a43b856f53a243"
	Aug 19 20:18:15 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:15.076366   10630 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-382787?timeout=10s\": dial tcp 192.168.50.10:8443: connect: connection refused" interval="7s"
	Aug 19 20:18:15 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:15.373919   10630 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 20:18:15 kubernetes-upgrade-382787 kubelet[10630]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 20:18:15 kubernetes-upgrade-382787 kubelet[10630]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 20:18:15 kubernetes-upgrade-382787 kubelet[10630]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 20:18:15 kubernetes-upgrade-382787 kubelet[10630]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 20:18:15 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:15.443372   10630 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724098695442948420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:18:15 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:15.443432   10630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724098695442948420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:18:17 kubernetes-upgrade-382787 kubelet[10630]: I0819 20:18:17.164277   10630 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-382787"
	Aug 19 20:18:17 kubernetes-upgrade-382787 kubelet[10630]: E0819 20:18:17.165162   10630 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.10:8443: connect: connection refused" node="kubernetes-upgrade-382787"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-382787 -n kubernetes-upgrade-382787
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-382787 -n kubernetes-upgrade-382787: exit status 2 (241.12279ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-382787" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-382787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-382787
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-382787: (1.149592216s)
--- FAIL: TestKubernetesUpgrade (1147.80s)
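The kubelet errors captured above point at the proximate cause of this failure: a stale etcd container (ec0d39310a19…) still holds the name k8s_etcd_etcd-kubernetes-upgrade-382787_…, so the replacement etcd container cannot start and the apiserver on 192.168.50.10:8443 never becomes reachable (hence the repeated "connection refused" lines from kube-controller-manager, kube-scheduler and kubelet). A minimal manual check of that state, assuming the VM is still reachable over SSH and that crictl is available in the guest (neither step is part of what the test harness ran), might look like:

	# list all etcd containers, including exited ones, to find the instance holding the name
	out/minikube-linux-amd64 -p kubernetes-upgrade-382787 ssh "sudo crictl ps -a --name etcd"
	# remove the container with the conflicting name; <CONTAINER_ID> is the ID reported in the listing above
	out/minikube-linux-amd64 -p kubernetes-upgrade-382787 ssh "sudo crictl rm <CONTAINER_ID>"

If the name conflict is the only problem, kubelet should then be able to recreate the etcd static pod on its next sync and the apiserver health checks above would stop failing.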

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (44.64s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-232147 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-232147 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.457067144s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-232147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-232147" primary control-plane node in "pause-232147" cluster
	* Updating the running kvm2 "pause-232147" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-232147" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:58:28.442467  481208 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:58:28.442882  481208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:58:28.442895  481208 out.go:358] Setting ErrFile to fd 2...
	I0819 19:58:28.442902  481208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:58:28.443257  481208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:58:28.444022  481208 out.go:352] Setting JSON to false
	I0819 19:58:28.445427  481208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13259,"bootTime":1724084249,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:58:28.445516  481208 start.go:139] virtualization: kvm guest
	I0819 19:58:28.598957  481208 out.go:177] * [pause-232147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:58:28.713722  481208 notify.go:220] Checking for updates...
	I0819 19:58:28.713778  481208 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 19:58:28.715091  481208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:58:28.716284  481208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:58:28.717917  481208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:58:28.719597  481208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:58:28.721203  481208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:58:28.723040  481208 config.go:182] Loaded profile config "pause-232147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:58:28.723648  481208 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:28.723713  481208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:28.746101  481208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0819 19:58:28.746707  481208 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:28.747485  481208 main.go:141] libmachine: Using API Version  1
	I0819 19:58:28.747517  481208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:28.748096  481208 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:28.748361  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:28.748754  481208 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 19:58:28.749246  481208 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:28.749306  481208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:28.771424  481208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0819 19:58:28.772266  481208 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:28.772961  481208 main.go:141] libmachine: Using API Version  1
	I0819 19:58:28.772984  481208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:28.773458  481208 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:28.773674  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:28.817297  481208 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:58:28.818617  481208 start.go:297] selected driver: kvm2
	I0819 19:58:28.818692  481208 start.go:901] validating driver "kvm2" against &{Name:pause-232147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.0 ClusterName:pause-232147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:58:28.818917  481208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:58:28.819460  481208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:58:28.819611  481208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:58:28.842758  481208 install.go:137] /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:58:28.843460  481208 cni.go:84] Creating CNI manager for ""
	I0819 19:58:28.843468  481208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:58:28.843525  481208 start.go:340] cluster config:
	{Name:pause-232147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-232147 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:58:28.843709  481208 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:58:28.845317  481208 out.go:177] * Starting "pause-232147" primary control-plane node in "pause-232147" cluster
	I0819 19:58:28.846118  481208 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:58:28.846165  481208 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:58:28.846174  481208 cache.go:56] Caching tarball of preloaded images
	I0819 19:58:28.846273  481208 preload.go:172] Found /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:58:28.846288  481208 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:58:28.846469  481208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/config.json ...
	I0819 19:58:28.846739  481208 start.go:360] acquireMachinesLock for pause-232147: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:58:31.358832  481208 start.go:364] duration metric: took 2.512017455s to acquireMachinesLock for "pause-232147"
	I0819 19:58:31.358880  481208 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:58:31.358892  481208 fix.go:54] fixHost starting: 
	I0819 19:58:31.359362  481208 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:31.359419  481208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:31.381807  481208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
	I0819 19:58:31.382291  481208 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:31.382874  481208 main.go:141] libmachine: Using API Version  1
	I0819 19:58:31.382895  481208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:31.383285  481208 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:31.383487  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:31.383651  481208 main.go:141] libmachine: (pause-232147) Calling .GetState
	I0819 19:58:31.385400  481208 fix.go:112] recreateIfNeeded on pause-232147: state=Running err=<nil>
	W0819 19:58:31.385443  481208 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:58:31.387793  481208 out.go:177] * Updating the running kvm2 "pause-232147" VM ...
	I0819 19:58:31.388963  481208 machine.go:93] provisionDockerMachine start ...
	I0819 19:58:31.388997  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:31.389297  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:31.392309  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.392752  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.392784  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.392909  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:31.393153  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.393333  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.393480  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:31.393664  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:31.393860  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:31.393871  481208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:58:31.519088  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-232147
	
	I0819 19:58:31.519130  481208 main.go:141] libmachine: (pause-232147) Calling .GetMachineName
	I0819 19:58:31.519424  481208 buildroot.go:166] provisioning hostname "pause-232147"
	I0819 19:58:31.519456  481208 main.go:141] libmachine: (pause-232147) Calling .GetMachineName
	I0819 19:58:31.519708  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:31.524012  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.524468  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.524512  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.524877  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:31.525160  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.525378  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.525600  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:31.525797  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:31.526030  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:31.526050  481208 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-232147 && echo "pause-232147" | sudo tee /etc/hostname
	I0819 19:58:31.684523  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-232147
	
	I0819 19:58:31.684572  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:31.689946  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.690907  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.690949  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.693424  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:31.693677  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.693860  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.694050  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:31.694292  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:31.694555  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:31.694581  481208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-232147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-232147/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-232147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:58:31.820537  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:58:31.820572  481208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:58:31.820616  481208 buildroot.go:174] setting up certificates
	I0819 19:58:31.820631  481208 provision.go:84] configureAuth start
	I0819 19:58:31.820645  481208 main.go:141] libmachine: (pause-232147) Calling .GetMachineName
	I0819 19:58:31.821054  481208 main.go:141] libmachine: (pause-232147) Calling .GetIP
	I0819 19:58:31.824252  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.824872  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.824899  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.824952  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:31.828009  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.828405  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.828430  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.828755  481208 provision.go:143] copyHostCerts
	I0819 19:58:31.828816  481208 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:58:31.828837  481208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:58:31.828913  481208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:58:31.829048  481208 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:58:31.829059  481208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:58:31.829089  481208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:58:31.829219  481208 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:58:31.829233  481208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:58:31.829267  481208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:58:31.829338  481208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.pause-232147 san=[127.0.0.1 192.168.50.125 localhost minikube pause-232147]
	I0819 19:58:32.050961  481208 provision.go:177] copyRemoteCerts
	I0819 19:58:32.051050  481208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:58:32.051084  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:32.062514  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:32.206080  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:32.206125  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:32.206621  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:32.206873  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:32.207097  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:32.207306  481208 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/pause-232147/id_rsa Username:docker}
	I0819 19:58:32.300698  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:58:32.331302  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0819 19:58:32.366804  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:58:32.396394  481208 provision.go:87] duration metric: took 575.744609ms to configureAuth
	I0819 19:58:32.396520  481208 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:58:32.396872  481208 config.go:182] Loaded profile config "pause-232147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:58:32.396984  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:32.800274  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:32.800754  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:32.800817  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:32.801001  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:32.801269  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:32.801444  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:32.801589  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:32.801804  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:32.802033  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:32.802058  481208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:58:40.212535  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:58:40.212579  481208 machine.go:96] duration metric: took 8.823594081s to provisionDockerMachine
	I0819 19:58:40.212595  481208 start.go:293] postStartSetup for "pause-232147" (driver="kvm2")
	I0819 19:58:40.212609  481208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:58:40.212642  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.213057  481208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:58:40.213092  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:40.216311  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.216817  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.216844  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.217076  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:40.217330  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.217515  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:40.217682  481208 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/pause-232147/id_rsa Username:docker}
	I0819 19:58:40.316283  481208 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:58:40.322399  481208 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:58:40.322447  481208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:58:40.322557  481208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:58:40.322676  481208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:58:40.322820  481208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:58:40.337792  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:58:40.372596  481208 start.go:296] duration metric: took 159.984571ms for postStartSetup
	I0819 19:58:40.372650  481208 fix.go:56] duration metric: took 9.01375792s for fixHost
	I0819 19:58:40.372680  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:40.376119  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.376610  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.376639  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.376989  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:40.377312  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.377518  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.377676  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:40.377858  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:40.378087  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:40.378105  481208 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:58:40.507374  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724097520.497291171
	
	I0819 19:58:40.507408  481208 fix.go:216] guest clock: 1724097520.497291171
	I0819 19:58:40.507418  481208 fix.go:229] Guest: 2024-08-19 19:58:40.497291171 +0000 UTC Remote: 2024-08-19 19:58:40.372656161 +0000 UTC m=+11.987187457 (delta=124.63501ms)
	I0819 19:58:40.507448  481208 fix.go:200] guest clock delta is within tolerance: 124.63501ms
	I0819 19:58:40.507456  481208 start.go:83] releasing machines lock for "pause-232147", held for 9.148597464s
	I0819 19:58:40.507935  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.508287  481208 main.go:141] libmachine: (pause-232147) Calling .GetIP
	I0819 19:58:40.513009  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.513574  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.513608  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.514007  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.514704  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.514942  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.515039  481208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:58:40.515086  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:40.515181  481208 ssh_runner.go:195] Run: cat /version.json
	I0819 19:58:40.515194  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:40.519584  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.519937  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.520469  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.520648  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.520677  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.520717  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.521387  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:40.521428  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:40.521678  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.521854  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:40.522053  481208 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/pause-232147/id_rsa Username:docker}
	I0819 19:58:40.522692  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.522868  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:40.523041  481208 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/pause-232147/id_rsa Username:docker}
	I0819 19:58:40.639459  481208 ssh_runner.go:195] Run: systemctl --version
	I0819 19:58:40.655718  481208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:58:40.840962  481208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:58:40.853198  481208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:58:40.853277  481208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:58:40.868796  481208 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 19:58:40.868827  481208 start.go:495] detecting cgroup driver to use...
	I0819 19:58:40.868899  481208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:58:40.900245  481208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:58:40.920400  481208 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:58:40.920463  481208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:58:40.938443  481208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:58:40.955169  481208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:58:41.146319  481208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:58:41.353301  481208 docker.go:233] disabling docker service ...
	I0819 19:58:41.353400  481208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:58:41.387771  481208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:58:41.412948  481208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:58:41.554623  481208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:58:41.861017  481208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:58:41.982330  481208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:58:42.076111  481208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:58:42.076186  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.109280  481208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:58:42.109371  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.145729  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.189101  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.240615  481208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:58:42.274951  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.289026  481208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.305852  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.326459  481208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:58:42.346162  481208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:58:42.356878  481208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:58:42.608928  481208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:58:43.169953  481208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:58:43.170036  481208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:58:43.176235  481208 start.go:563] Will wait 60s for crictl version
	I0819 19:58:43.176307  481208 ssh_runner.go:195] Run: which crictl
	I0819 19:58:43.180628  481208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:58:43.216848  481208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:58:43.216956  481208 ssh_runner.go:195] Run: crio --version
	I0819 19:58:43.253336  481208 ssh_runner.go:195] Run: crio --version
	I0819 19:58:43.293348  481208 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:58:43.294479  481208 main.go:141] libmachine: (pause-232147) Calling .GetIP
	I0819 19:58:43.297894  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:43.298326  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:43.298362  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:43.298669  481208 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:58:43.303270  481208 kubeadm.go:883] updating cluster {Name:pause-232147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-232147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:58:43.303482  481208 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:58:43.303557  481208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:58:43.362917  481208 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:58:43.362952  481208 crio.go:433] Images already preloaded, skipping extraction
	I0819 19:58:43.363019  481208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:58:43.405430  481208 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:58:43.405463  481208 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:58:43.405474  481208 kubeadm.go:934] updating node { 192.168.50.125 8443 v1.31.0 crio true true} ...
	I0819 19:58:43.405617  481208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-232147 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-232147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:58:43.405717  481208 ssh_runner.go:195] Run: crio config
	I0819 19:58:43.466333  481208 cni.go:84] Creating CNI manager for ""
	I0819 19:58:43.466366  481208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:58:43.466378  481208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:58:43.466409  481208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-232147 NodeName:pause-232147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:58:43.466606  481208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-232147"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:58:43.466692  481208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:58:43.480800  481208 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:58:43.480943  481208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:58:43.494098  481208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 19:58:43.518868  481208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:58:43.550730  481208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0819 19:58:43.575823  481208 ssh_runner.go:195] Run: grep 192.168.50.125	control-plane.minikube.internal$ /etc/hosts
	I0819 19:58:43.582039  481208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:58:43.767732  481208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:58:43.821156  481208 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147 for IP: 192.168.50.125
	I0819 19:58:43.821187  481208 certs.go:194] generating shared ca certs ...
	I0819 19:58:43.821211  481208 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:43.821396  481208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:58:43.821450  481208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:58:43.821467  481208 certs.go:256] generating profile certs ...
	I0819 19:58:43.821620  481208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/client.key
	I0819 19:58:43.821705  481208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/apiserver.key.bef1e027
	I0819 19:58:43.821761  481208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/proxy-client.key
	I0819 19:58:43.821912  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:58:43.821949  481208 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:58:43.821958  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:58:43.821988  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:58:43.822021  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:58:43.822045  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:58:43.822096  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:58:43.823008  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:58:44.086925  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:58:44.262859  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:58:44.445056  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:58:44.554734  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 19:58:44.659085  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:58:44.730751  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:58:44.769563  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 19:58:44.811174  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:58:44.845022  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:58:44.888275  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:58:44.931824  481208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:58:45.018160  481208 ssh_runner.go:195] Run: openssl version
	I0819 19:58:45.028573  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:58:45.044017  481208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.052240  481208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.052330  481208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.061553  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:58:45.076326  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:58:45.096827  481208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.103822  481208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.104009  481208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.112205  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:58:45.124865  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:58:45.140513  481208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.146811  481208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.146908  481208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.154991  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:58:45.174607  481208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:58:45.180731  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:58:45.188748  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:58:45.196496  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:58:45.204894  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:58:45.216659  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:58:45.225302  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 19:58:45.237522  481208 kubeadm.go:392] StartCluster: {Name:pause-232147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-232147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:58:45.237752  481208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:58:45.237831  481208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:58:45.312153  481208 cri.go:89] found id: "ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f"
	I0819 19:58:45.312251  481208 cri.go:89] found id: "97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606"
	I0819 19:58:45.312273  481208 cri.go:89] found id: "8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae"
	I0819 19:58:45.312302  481208 cri.go:89] found id: "a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3"
	I0819 19:58:45.312332  481208 cri.go:89] found id: "bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2"
	I0819 19:58:45.312347  481208 cri.go:89] found id: "87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0"
	I0819 19:58:45.312367  481208 cri.go:89] found id: "ba564d4d374b6de35552277a9f888a707e3fcc74a84da8bf6e8a43763dbe7a5c"
	I0819 19:58:45.312408  481208 cri.go:89] found id: "c362bfb09b902727dca16cc486a92f740411447ccf8a54937f1a2ce6b4861b94"
	I0819 19:58:45.312435  481208 cri.go:89] found id: ""
	I0819 19:58:45.312531  481208 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-232147 -n pause-232147
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-232147 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-232147 logs -n 25: (1.434408619s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo cat                            | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo docker                         | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo cat                            | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo cat                            | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo cat                            | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo find                           | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo crio                           | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p cilium-072157                                     | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC | 19 Aug 24 19:57 UTC |
	| start   | -p pause-232147 --memory=2048                        | pause-232147           | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC | 19 Aug 24 19:58 UTC |
	|         | --install-addons=false                               |                        |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p cert-expiration-228973                            | cert-expiration-228973 | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC | 19 Aug 24 19:58 UTC |
	|         | --memory=2048                                        |                        |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p NoKubernetes-803941                               | NoKubernetes-803941    | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC | 19 Aug 24 19:58 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p running-upgrade-814149                            | running-upgrade-814149 | jenkins | v1.33.1 | 19 Aug 24 19:58 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                    |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p pause-232147                                      | pause-232147           | jenkins | v1.33.1 | 19 Aug 24 19:58 UTC | 19 Aug 24 19:59 UTC |
	|         | --alsologtostderr                                    |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| delete  | -p NoKubernetes-803941                               | NoKubernetes-803941    | jenkins | v1.33.1 | 19 Aug 24 19:58 UTC | 19 Aug 24 19:58 UTC |
	| start   | -p NoKubernetes-803941                               | NoKubernetes-803941    | jenkins | v1.33.1 | 19 Aug 24 19:58 UTC | 19 Aug 24 19:59 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| ssh     | -p NoKubernetes-803941 sudo                          | NoKubernetes-803941    | jenkins | v1.33.1 | 19 Aug 24 19:59 UTC |                     |
	|         | systemctl is-active --quiet                          |                        |         |         |                     |                     |
	|         | service kubelet                                      |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:58:33
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:58:33.071169  481365 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:58:33.071262  481365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:58:33.071265  481365 out.go:358] Setting ErrFile to fd 2...
	I0819 19:58:33.071268  481365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:58:33.071469  481365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:58:33.072056  481365 out.go:352] Setting JSON to false
	I0819 19:58:33.073111  481365 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13264,"bootTime":1724084249,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:58:33.073194  481365 start.go:139] virtualization: kvm guest
	I0819 19:58:33.076478  481365 out.go:177] * [NoKubernetes-803941] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:58:33.077722  481365 notify.go:220] Checking for updates...
	I0819 19:58:33.077742  481365 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 19:58:33.079016  481365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:58:33.080282  481365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:58:33.081707  481365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:58:33.083105  481365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:58:33.084250  481365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:58:33.085985  481365 config.go:182] Loaded profile config "cert-expiration-228973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:58:33.086180  481365 config.go:182] Loaded profile config "pause-232147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:58:33.086344  481365 config.go:182] Loaded profile config "running-upgrade-814149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0819 19:58:33.086370  481365 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0819 19:58:33.086475  481365 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 19:58:33.126767  481365 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 19:58:33.128018  481365 start.go:297] selected driver: kvm2
	I0819 19:58:33.128036  481365 start.go:901] validating driver "kvm2" against <nil>
	I0819 19:58:33.128051  481365 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:58:33.128491  481365 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0819 19:58:33.128569  481365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:58:33.128660  481365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:58:33.146060  481365 install.go:137] /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:58:33.146119  481365 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 19:58:33.146663  481365 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0819 19:58:33.146867  481365 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 19:58:33.146918  481365 cni.go:84] Creating CNI manager for ""
	I0819 19:58:33.146928  481365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:58:33.146934  481365 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 19:58:33.146946  481365 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0819 19:58:33.146990  481365 start.go:340] cluster config:
	{Name:NoKubernetes-803941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-803941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:58:33.147091  481365 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:58:33.148818  481365 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-803941
	I0819 19:58:31.388963  481208 machine.go:93] provisionDockerMachine start ...
	I0819 19:58:31.388997  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:31.389297  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:31.392309  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.392752  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.392784  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.392909  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:31.393153  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.393333  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.393480  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:31.393664  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:31.393860  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:31.393871  481208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:58:31.519088  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-232147
	
	I0819 19:58:31.519130  481208 main.go:141] libmachine: (pause-232147) Calling .GetMachineName
	I0819 19:58:31.519424  481208 buildroot.go:166] provisioning hostname "pause-232147"
	I0819 19:58:31.519456  481208 main.go:141] libmachine: (pause-232147) Calling .GetMachineName
	I0819 19:58:31.519708  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:31.524012  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.524468  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.524512  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.524877  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:31.525160  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.525378  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.525600  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:31.525797  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:31.526030  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:31.526050  481208 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-232147 && echo "pause-232147" | sudo tee /etc/hostname
	I0819 19:58:31.684523  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-232147
	
	I0819 19:58:31.684572  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:31.689946  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.690907  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.690949  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.693424  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:31.693677  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.693860  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.694050  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:31.694292  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:31.694555  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:31.694581  481208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-232147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-232147/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-232147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:58:31.820537  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:58:31.820572  481208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:58:31.820616  481208 buildroot.go:174] setting up certificates
	I0819 19:58:31.820631  481208 provision.go:84] configureAuth start
	I0819 19:58:31.820645  481208 main.go:141] libmachine: (pause-232147) Calling .GetMachineName
	I0819 19:58:31.821054  481208 main.go:141] libmachine: (pause-232147) Calling .GetIP
	I0819 19:58:31.824252  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.824872  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.824899  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.824952  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:31.828009  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.828405  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.828430  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.828755  481208 provision.go:143] copyHostCerts
	I0819 19:58:31.828816  481208 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:58:31.828837  481208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:58:31.828913  481208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:58:31.829048  481208 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:58:31.829059  481208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:58:31.829089  481208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:58:31.829219  481208 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:58:31.829233  481208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:58:31.829267  481208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:58:31.829338  481208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.pause-232147 san=[127.0.0.1 192.168.50.125 localhost minikube pause-232147]
	I0819 19:58:32.050961  481208 provision.go:177] copyRemoteCerts
	I0819 19:58:32.051050  481208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:58:32.051084  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:32.062514  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:32.206080  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:32.206125  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:32.206621  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:32.206873  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:32.207097  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:32.207306  481208 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/pause-232147/id_rsa Username:docker}
	I0819 19:58:32.300698  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:58:32.331302  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0819 19:58:32.366804  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:58:32.396394  481208 provision.go:87] duration metric: took 575.744609ms to configureAuth
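The scp calls above install the CA and server certificates under /etc/docker on the guest; the provision.go:117 line further up lists the SANs baked into server.pem (127.0.0.1, 192.168.50.125, localhost, minikube, pause-232147). If that step ever needs to be checked by hand, something like the following should print the SANs over SSH; this is a sketch assuming the default paths shown in this log, not a command the test actually ran:

  out/minikube-linux-amd64 -p pause-232147 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"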
	I0819 19:58:32.396520  481208 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:58:32.396872  481208 config.go:182] Loaded profile config "pause-232147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:58:32.396984  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:32.800274  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:32.800754  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:32.800817  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:32.801001  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:32.801269  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:32.801444  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:32.801589  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:32.801804  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:32.802033  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:32.802058  481208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:58:29.839179  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:29.839213  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:29.839373  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHHostname
	I0819 19:58:29.842006  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:29.842502  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:29.842588  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:29.842655  481009 provision.go:143] copyHostCerts
	I0819 19:58:29.842725  481009 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:58:29.842748  481009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:58:29.842815  481009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:58:29.842938  481009 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:58:29.842950  481009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:58:29.842982  481009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:58:29.843059  481009 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:58:29.843069  481009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:58:29.843096  481009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:58:29.843163  481009 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-814149 san=[127.0.0.1 192.168.39.238 localhost minikube running-upgrade-814149]
	I0819 19:58:30.035337  481009 provision.go:177] copyRemoteCerts
	I0819 19:58:30.035422  481009 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:58:30.035468  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHHostname
	I0819 19:58:30.038988  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:30.039800  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:30.039836  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:30.039857  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHPort
	I0819 19:58:30.040104  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:30.040295  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHUsername
	I0819 19:58:30.040490  481009 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/running-upgrade-814149/id_rsa Username:docker}
	I0819 19:58:30.185597  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:58:30.278587  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 19:58:30.312547  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 19:58:30.385191  481009 provision.go:87] duration metric: took 549.928883ms to configureAuth
	I0819 19:58:30.385225  481009 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:58:30.385458  481009 config.go:182] Loaded profile config "running-upgrade-814149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0819 19:58:30.385557  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHHostname
	I0819 19:58:30.388596  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:30.389062  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:30.389105  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:30.389308  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHPort
	I0819 19:58:30.389572  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:30.389902  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:30.390113  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHUsername
	I0819 19:58:30.390315  481009 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:30.390527  481009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 19:58:30.390548  481009 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:58:31.083722  481009 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:58:31.083755  481009 machine.go:96] duration metric: took 1.895408222s to provisionDockerMachine
	I0819 19:58:31.083772  481009 start.go:293] postStartSetup for "running-upgrade-814149" (driver="kvm2")
	I0819 19:58:31.083786  481009 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:58:31.083834  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .DriverName
	I0819 19:58:31.084190  481009 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:58:31.084221  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHHostname
	I0819 19:58:31.087008  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.087479  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:31.087511  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.087705  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHPort
	I0819 19:58:31.087925  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:31.088085  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHUsername
	I0819 19:58:31.088272  481009 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/running-upgrade-814149/id_rsa Username:docker}
	I0819 19:58:31.181029  481009 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:58:31.185362  481009 info.go:137] Remote host: Buildroot 2021.02.12
	I0819 19:58:31.185398  481009 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:58:31.185484  481009 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:58:31.185586  481009 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:58:31.185706  481009 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:58:31.194293  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:58:31.218716  481009 start.go:296] duration metric: took 134.925617ms for postStartSetup
	I0819 19:58:31.218769  481009 fix.go:56] duration metric: took 2.059278178s for fixHost
	I0819 19:58:31.218798  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHHostname
	I0819 19:58:31.222041  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.222476  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:31.222508  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.222734  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHPort
	I0819 19:58:31.222980  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:31.223165  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:31.223379  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHUsername
	I0819 19:58:31.223607  481009 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:31.223834  481009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 19:58:31.223854  481009 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:58:31.358639  481009 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724097511.352223012
	
	I0819 19:58:31.358668  481009 fix.go:216] guest clock: 1724097511.352223012
	I0819 19:58:31.358679  481009 fix.go:229] Guest: 2024-08-19 19:58:31.352223012 +0000 UTC Remote: 2024-08-19 19:58:31.218774838 +0000 UTC m=+21.422283084 (delta=133.448174ms)
	I0819 19:58:31.358706  481009 fix.go:200] guest clock delta is within tolerance: 133.448174ms
	I0819 19:58:31.358713  481009 start.go:83] releasing machines lock for "running-upgrade-814149", held for 2.199259317s
	I0819 19:58:31.358744  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .DriverName
	I0819 19:58:31.359065  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetIP
	I0819 19:58:31.362320  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.362720  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:31.362754  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.362922  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .DriverName
	I0819 19:58:31.363559  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .DriverName
	I0819 19:58:31.363825  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .DriverName
	I0819 19:58:31.363997  481009 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:58:31.364069  481009 ssh_runner.go:195] Run: cat /version.json
	I0819 19:58:31.364099  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHHostname
	I0819 19:58:31.364129  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHHostname
	I0819 19:58:31.367135  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.367356  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.367581  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:31.367604  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.367750  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:31.367821  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.368118  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHPort
	I0819 19:58:31.368151  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHPort
	I0819 19:58:31.368349  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:31.368495  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:31.368524  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHUsername
	I0819 19:58:31.368698  481009 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/running-upgrade-814149/id_rsa Username:docker}
	I0819 19:58:31.368714  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHUsername
	I0819 19:58:31.369060  481009 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/running-upgrade-814149/id_rsa Username:docker}
	W0819 19:58:31.491537  481009 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0819 19:58:31.491637  481009 ssh_runner.go:195] Run: systemctl --version
	I0819 19:58:31.497813  481009 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:58:31.655864  481009 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:58:31.663634  481009 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:58:31.663722  481009 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:58:31.689360  481009 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:58:31.689387  481009 start.go:495] detecting cgroup driver to use...
	I0819 19:58:31.689462  481009 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:58:31.711688  481009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:58:31.725396  481009 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:58:31.725459  481009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:58:31.740753  481009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:58:31.762925  481009 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:58:32.034938  481009 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:58:32.285880  481009 docker.go:233] disabling docker service ...
	I0819 19:58:32.285952  481009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:58:32.324663  481009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:58:32.351309  481009 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:58:32.619261  481009 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:58:32.827081  481009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:58:32.843478  481009 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:58:32.864246  481009 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0819 19:58:32.864308  481009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:32.874899  481009 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:58:32.874998  481009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:32.885595  481009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:32.895009  481009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:32.904900  481009 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:58:32.914721  481009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:32.931214  481009 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:32.962451  481009 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:32.974570  481009 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:58:32.985846  481009 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:58:32.996448  481009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:58:33.186818  481009 ssh_runner.go:195] Run: sudo systemctl restart crio
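Taken together, the sed edits above set the pause image, switch the cgroup manager to cgroupfs, pin conmon to the "pod" cgroup, and open unprivileged low ports, so after the restart /etc/crio/crio.conf.d/02-crio.conf should carry a fragment along these lines. This is a reconstruction from the commands in the log; the section headers are assumed from CRI-O's stock config layout, and the file itself was not captured:

  [crio.image]
  pause_image = "registry.k8s.io/pause:3.7"

  [crio.runtime]
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]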
	I0819 19:58:33.723342  481009 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:58:33.723433  481009 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:58:33.728541  481009 start.go:563] Will wait 60s for crictl version
	I0819 19:58:33.728615  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:33.732579  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:58:33.761608  481009 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I0819 19:58:33.761704  481009 ssh_runner.go:195] Run: crio --version
	I0819 19:58:33.797721  481009 ssh_runner.go:195] Run: crio --version
	I0819 19:58:33.849409  481009 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
	I0819 19:58:33.850534  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetIP
	I0819 19:58:33.853502  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:33.854097  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:33.854127  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:33.854409  481009 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:58:33.858591  481009 kubeadm.go:883] updating cluster {Name:running-upgrade-814149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-814149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0819 19:58:33.858708  481009 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0819 19:58:33.858769  481009 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:58:33.899847  481009 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0819 19:58:33.899937  481009 ssh_runner.go:195] Run: which lz4
	I0819 19:58:33.903531  481009 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:58:33.907185  481009 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:58:33.907227  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (496813465 bytes)
	I0819 19:58:33.150123  481365 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0819 19:58:33.244336  481365 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0819 19:58:33.244530  481365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/NoKubernetes-803941/config.json ...
	I0819 19:58:33.244581  481365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/NoKubernetes-803941/config.json: {Name:mkb98ef1899eab6381ae643e270a19ddb3eb8009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:33.244788  481365 start.go:360] acquireMachinesLock for NoKubernetes-803941: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:58:35.749247  481009 crio.go:462] duration metric: took 1.84575063s to copy over tarball
	I0819 19:58:35.749348  481009 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:58:39.681823  481009 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.932423821s)
	I0819 19:58:39.681866  481009 crio.go:469] duration metric: took 3.932583624s to extract the tarball
	I0819 19:58:39.681878  481009 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:58:39.729354  481009 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:58:39.765283  481009 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0819 19:58:39.765311  481009 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:58:39.765379  481009 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 19:58:39.765404  481009 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:58:39.765437  481009 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 19:58:39.765454  481009 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 19:58:39.765378  481009 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:58:39.765513  481009 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 19:58:39.765511  481009 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 19:58:39.765496  481009 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 19:58:39.767023  481009 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 19:58:39.767060  481009 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 19:58:39.767019  481009 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 19:58:39.767096  481009 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 19:58:39.767027  481009 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:58:39.767140  481009 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 19:58:39.767519  481009 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 19:58:39.767542  481009 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:58:40.132027  480165 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 19:58:40.132090  480165 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:58:40.132182  480165 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:58:40.132297  480165 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:58:40.132417  480165 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 19:58:40.132497  480165 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:58:40.133861  480165 out.go:235]   - Generating certificates and keys ...
	I0819 19:58:40.133979  480165 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:58:40.134056  480165 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:58:40.134140  480165 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 19:58:40.134217  480165 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 19:58:40.134291  480165 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 19:58:40.134346  480165 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 19:58:40.134407  480165 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 19:58:40.134554  480165 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-228973 localhost] and IPs [192.168.72.176 127.0.0.1 ::1]
	I0819 19:58:40.134615  480165 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 19:58:40.134766  480165 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-228973 localhost] and IPs [192.168.72.176 127.0.0.1 ::1]
	I0819 19:58:40.134850  480165 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 19:58:40.134923  480165 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 19:58:40.134975  480165 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 19:58:40.135039  480165 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:58:40.135100  480165 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:58:40.135165  480165 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 19:58:40.135227  480165 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:58:40.135303  480165 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:58:40.135367  480165 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:58:40.135458  480165 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:58:40.135541  480165 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:58:40.137056  480165 out.go:235]   - Booting up control plane ...
	I0819 19:58:40.137208  480165 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:58:40.137307  480165 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:58:40.137384  480165 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:58:40.137553  480165 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:58:40.137696  480165 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:58:40.137751  480165 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:58:40.137917  480165 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 19:58:40.138037  480165 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 19:58:40.138105  480165 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.633408ms
	I0819 19:58:40.138187  480165 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 19:58:40.138256  480165 kubeadm.go:310] [api-check] The API server is healthy after 6.001455262s
	I0819 19:58:40.138399  480165 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 19:58:40.138540  480165 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 19:58:40.138606  480165 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 19:58:40.138836  480165 kubeadm.go:310] [mark-control-plane] Marking the node cert-expiration-228973 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 19:58:40.138901  480165 kubeadm.go:310] [bootstrap-token] Using token: zlroav.q8awq3g8noywle77
	I0819 19:58:40.140385  480165 out.go:235]   - Configuring RBAC rules ...
	I0819 19:58:40.140536  480165 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 19:58:40.140690  480165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 19:58:40.140927  480165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 19:58:40.141168  480165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 19:58:40.141303  480165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 19:58:40.141411  480165 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 19:58:40.141592  480165 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 19:58:40.141677  480165 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 19:58:40.141737  480165 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 19:58:40.141742  480165 kubeadm.go:310] 
	I0819 19:58:40.141825  480165 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 19:58:40.141831  480165 kubeadm.go:310] 
	I0819 19:58:40.141934  480165 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 19:58:40.141939  480165 kubeadm.go:310] 
	I0819 19:58:40.141967  480165 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 19:58:40.142074  480165 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 19:58:40.142135  480165 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 19:58:40.142141  480165 kubeadm.go:310] 
	I0819 19:58:40.142222  480165 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 19:58:40.142229  480165 kubeadm.go:310] 
	I0819 19:58:40.142289  480165 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 19:58:40.142294  480165 kubeadm.go:310] 
	I0819 19:58:40.142352  480165 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 19:58:40.142448  480165 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 19:58:40.142531  480165 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 19:58:40.142536  480165 kubeadm.go:310] 
	I0819 19:58:40.142632  480165 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 19:58:40.142746  480165 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 19:58:40.142754  480165 kubeadm.go:310] 
	I0819 19:58:40.142868  480165 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zlroav.q8awq3g8noywle77 \
	I0819 19:58:40.142990  480165 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f \
	I0819 19:58:40.143022  480165 kubeadm.go:310] 	--control-plane 
	I0819 19:58:40.143026  480165 kubeadm.go:310] 
	I0819 19:58:40.143122  480165 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 19:58:40.143127  480165 kubeadm.go:310] 
	I0819 19:58:40.143246  480165 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zlroav.q8awq3g8noywle77 \
	I0819 19:58:40.143371  480165 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f 
	I0819 19:58:40.143400  480165 cni.go:84] Creating CNI manager for ""
	I0819 19:58:40.143409  480165 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:58:40.144947  480165 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:58:40.507535  481365 start.go:364] duration metric: took 7.262725011s to acquireMachinesLock for "NoKubernetes-803941"
	I0819 19:58:40.507590  481365 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-803941 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-803941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:58:40.507701  481365 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 19:58:40.146212  480165 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:58:40.160991  480165 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
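The 496-byte /etc/cni/net.d/1-k8s.conflist written above is not printed in the log. As a rough illustration of the bridge CNI conflist format it uses, the payload looks something like the following; every field value here is an assumption for illustration, not the file minikube actually generated:

  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }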
	I0819 19:58:40.190866  480165 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:58:40.190995  480165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:58:40.191019  480165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-228973 minikube.k8s.io/updated_at=2024_08_19T19_58_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=cert-expiration-228973 minikube.k8s.io/primary=true
	I0819 19:58:40.563567  480165 ops.go:34] apiserver oom_adj: -16
	I0819 19:58:40.563615  480165 kubeadm.go:1113] duration metric: took 372.695701ms to wait for elevateKubeSystemPrivileges
	I0819 19:58:40.563632  480165 kubeadm.go:394] duration metric: took 13.560252715s to StartCluster
	I0819 19:58:40.563654  480165 settings.go:142] acquiring lock: {Name:mkc71def6be9966e5f008a22161bf3ed26f482b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:40.563726  480165 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:58:40.565397  480165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:40.565715  480165 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.176 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:58:40.565914  480165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 19:58:40.566193  480165 config.go:182] Loaded profile config "cert-expiration-228973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:58:40.566249  480165 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:58:40.566309  480165 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-228973"
	I0819 19:58:40.566344  480165 addons.go:234] Setting addon storage-provisioner=true in "cert-expiration-228973"
	I0819 19:58:40.566374  480165 host.go:66] Checking if "cert-expiration-228973" exists ...
	I0819 19:58:40.566788  480165 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:40.566810  480165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:40.566996  480165 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-228973"
	I0819 19:58:40.567024  480165 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-228973"
	I0819 19:58:40.567424  480165 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:40.567448  480165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:40.571221  480165 out.go:177] * Verifying Kubernetes components...
	I0819 19:58:40.572692  480165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:58:40.590073  480165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0819 19:58:40.590543  480165 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:40.591122  480165 main.go:141] libmachine: Using API Version  1
	I0819 19:58:40.591136  480165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:40.591550  480165 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:40.591767  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetState
	I0819 19:58:40.595216  480165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0819 19:58:40.595628  480165 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:40.596185  480165 main.go:141] libmachine: Using API Version  1
	I0819 19:58:40.596197  480165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:40.596605  480165 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:40.597231  480165 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:40.597265  480165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:40.603904  480165 addons.go:234] Setting addon default-storageclass=true in "cert-expiration-228973"
	I0819 19:58:40.603942  480165 host.go:66] Checking if "cert-expiration-228973" exists ...
	I0819 19:58:40.604392  480165 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:40.604438  480165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:40.623382  480165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0819 19:58:40.624023  480165 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:40.624713  480165 main.go:141] libmachine: Using API Version  1
	I0819 19:58:40.624726  480165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:40.625089  480165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35059
	I0819 19:58:40.625271  480165 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:40.625495  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetState
	I0819 19:58:40.625731  480165 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:40.626233  480165 main.go:141] libmachine: Using API Version  1
	I0819 19:58:40.626246  480165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:40.626717  480165 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:40.627550  480165 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:40.627583  480165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:40.629971  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .DriverName
	I0819 19:58:40.631759  480165 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:58:40.633505  480165 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:58:40.633521  480165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:58:40.633547  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHHostname
	I0819 19:58:40.641887  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | domain cert-expiration-228973 has defined MAC address 52:54:00:b8:48:94 in network mk-cert-expiration-228973
	I0819 19:58:40.660555  480165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43501
	I0819 19:58:40.661380  480165 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:40.662049  480165 main.go:141] libmachine: Using API Version  1
	I0819 19:58:40.662064  480165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:40.662519  480165 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:40.662697  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetState
	I0819 19:58:40.671048  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .DriverName
	I0819 19:58:40.671370  480165 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:58:40.671383  480165 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:58:40.671406  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHHostname
	I0819 19:58:40.675791  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | domain cert-expiration-228973 has defined MAC address 52:54:00:b8:48:94 in network mk-cert-expiration-228973
	I0819 19:58:40.681714  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:48:94", ip: ""} in network mk-cert-expiration-228973: {Iface:virbr4 ExpiryTime:2024-08-19 20:58:08 +0000 UTC Type:0 Mac:52:54:00:b8:48:94 Iaid: IPaddr:192.168.72.176 Prefix:24 Hostname:cert-expiration-228973 Clientid:01:52:54:00:b8:48:94}
	I0819 19:58:40.681739  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | domain cert-expiration-228973 has defined IP address 192.168.72.176 and MAC address 52:54:00:b8:48:94 in network mk-cert-expiration-228973
	I0819 19:58:40.681865  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:48:94", ip: ""} in network mk-cert-expiration-228973: {Iface:virbr4 ExpiryTime:2024-08-19 20:58:08 +0000 UTC Type:0 Mac:52:54:00:b8:48:94 Iaid: IPaddr:192.168.72.176 Prefix:24 Hostname:cert-expiration-228973 Clientid:01:52:54:00:b8:48:94}
	I0819 19:58:40.681886  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | domain cert-expiration-228973 has defined IP address 192.168.72.176 and MAC address 52:54:00:b8:48:94 in network mk-cert-expiration-228973
	I0819 19:58:40.682453  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHPort
	I0819 19:58:40.682496  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHPort
	I0819 19:58:40.682741  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHKeyPath
	I0819 19:58:40.682780  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHKeyPath
	I0819 19:58:40.682892  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHUsername
	I0819 19:58:40.682929  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHUsername
	I0819 19:58:40.683035  480165 sshutil.go:53] new ssh client: &{IP:192.168.72.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/cert-expiration-228973/id_rsa Username:docker}
	I0819 19:58:40.683069  480165 sshutil.go:53] new ssh client: &{IP:192.168.72.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/cert-expiration-228973/id_rsa Username:docker}
	I0819 19:58:40.846804  480165 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:58:40.847012  480165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 19:58:40.971166  480165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:58:40.984333  480165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:58:41.450229  480165 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
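For reference, the sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.72.1 in this run). A minimal cross-check from a shell pointed at this cluster, not part of the recorded run, would be:
	# Print the Corefile and look for the injected hosts block
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# Expected to contain, per the sed expression shown above:
	#   hosts {
	#      192.168.72.1 host.minikube.internal
	#      fallthrough
	#   }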
	I0819 19:58:41.450394  480165 main.go:141] libmachine: Making call to close driver server
	I0819 19:58:41.450408  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .Close
	I0819 19:58:41.451593  480165 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:58:41.451656  480165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:58:41.452584  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | Closing plugin on server side
	I0819 19:58:41.452662  480165 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:58:41.452684  480165 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:58:41.452693  480165 main.go:141] libmachine: Making call to close driver server
	I0819 19:58:41.452700  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .Close
	I0819 19:58:41.453029  480165 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:58:41.453047  480165 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:58:41.470367  480165 main.go:141] libmachine: Making call to close driver server
	I0819 19:58:41.470383  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .Close
	I0819 19:58:41.470814  480165 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:58:41.470824  480165 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:58:41.827582  480165 main.go:141] libmachine: Making call to close driver server
	I0819 19:58:41.827607  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .Close
	I0819 19:58:41.827718  480165 api_server.go:72] duration metric: took 1.261974394s to wait for apiserver process to appear ...
	I0819 19:58:41.827729  480165 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:58:41.827746  480165 api_server.go:253] Checking apiserver healthz at https://192.168.72.176:8443/healthz ...
	I0819 19:58:41.830096  480165 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:58:41.830081  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | Closing plugin on server side
	I0819 19:58:41.830112  480165 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:58:41.830122  480165 main.go:141] libmachine: Making call to close driver server
	I0819 19:58:41.830131  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .Close
	I0819 19:58:41.830544  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | Closing plugin on server side
	I0819 19:58:41.830579  480165 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:58:41.830585  480165 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:58:41.833022  480165 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 19:58:41.834532  480165 addons.go:510] duration metric: took 1.26828143s for enable addons: enabled=[default-storageclass storage-provisioner]
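A quick way to double-check the two addons reported as enabled here, sketched with the stock minikube and kubectl CLIs (profile and pod name taken from this run):
	minikube addons list -p cert-expiration-228973
	kubectl get storageclass                          # the default-storageclass addon should provide one default class
	kubectl -n kube-system get pod storage-provisioner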
	I0819 19:58:41.846100  480165 api_server.go:279] https://192.168.72.176:8443/healthz returned 200:
	ok
	I0819 19:58:41.847765  480165 api_server.go:141] control plane version: v1.31.0
	I0819 19:58:41.847786  480165 api_server.go:131] duration metric: took 20.051025ms to wait for apiserver health ...
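The healthz probe above can be reproduced by hand against the same endpoint; a sketch using -k to skip certificate verification (minikube itself validates against the cluster CA), and assuming the default anonymous access to /healthz is still in place:
	curl -sk https://192.168.72.176:8443/healthz
	# expected output: ok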
	I0819 19:58:41.847795  480165 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:58:41.855405  480165 system_pods.go:59] 5 kube-system pods found
	I0819 19:58:41.855436  480165 system_pods.go:61] "etcd-cert-expiration-228973" [7d26eeae-0512-409b-95df-c64266cd3b8a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:58:41.855447  480165 system_pods.go:61] "kube-apiserver-cert-expiration-228973" [b378412e-bc19-4356-9790-6ca9cbc293fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:58:41.855458  480165 system_pods.go:61] "kube-controller-manager-cert-expiration-228973" [6b3a63db-cf89-4eeb-9ad9-baab6d35b0ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:58:41.855466  480165 system_pods.go:61] "kube-scheduler-cert-expiration-228973" [5441b59b-f632-4ac0-ac19-3e51200f416e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:58:41.855472  480165 system_pods.go:61] "storage-provisioner" [de2418b7-a74d-4ce1-bf65-bbb72aecc537] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0819 19:58:41.855480  480165 system_pods.go:74] duration metric: took 7.678476ms to wait for pod list to return data ...
	I0819 19:58:41.855502  480165 kubeadm.go:582] duration metric: took 1.289757145s to wait for: map[apiserver:true system_pods:true]
	I0819 19:58:41.855518  480165 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:58:41.860294  480165 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:58:41.860310  480165 node_conditions.go:123] node cpu capacity is 2
	I0819 19:58:41.860319  480165 node_conditions.go:105] duration metric: took 4.798282ms to run NodePressure ...
	I0819 19:58:41.860330  480165 start.go:241] waiting for startup goroutines ...
	I0819 19:58:41.956408  480165 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-228973" context rescaled to 1 replicas
	I0819 19:58:41.956441  480165 start.go:246] waiting for cluster config update ...
	I0819 19:58:41.956451  480165 start.go:255] writing updated cluster config ...
	I0819 19:58:41.956715  480165 ssh_runner.go:195] Run: rm -f paused
	I0819 19:58:42.041300  480165 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:58:42.043158  480165 out.go:177] * Done! kubectl is now configured to use "cert-expiration-228973" cluster and "default" namespace by default
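At this point the cert-expiration-228973 profile is up and the kubeconfig context has been switched to it; a minimal sanity check from the host, as a sketch, would be:
	kubectl config current-context   # cert-expiration-228973
	kubectl get nodes -o wide        # single control-plane node running v1.31.0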
	I0819 19:58:40.509331  481365 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0819 19:58:40.509590  481365 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:40.509624  481365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:40.530931  481365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0819 19:58:40.531598  481365 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:40.532164  481365 main.go:141] libmachine: Using API Version  1
	I0819 19:58:40.532176  481365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:40.532544  481365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:40.532730  481365 main.go:141] libmachine: (NoKubernetes-803941) Calling .GetMachineName
	I0819 19:58:40.532834  481365 main.go:141] libmachine: (NoKubernetes-803941) Calling .DriverName
	I0819 19:58:40.532960  481365 start.go:159] libmachine.API.Create for "NoKubernetes-803941" (driver="kvm2")
	I0819 19:58:40.532974  481365 client.go:168] LocalClient.Create starting
	I0819 19:58:40.533021  481365 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem
	I0819 19:58:40.533063  481365 main.go:141] libmachine: Decoding PEM data...
	I0819 19:58:40.533078  481365 main.go:141] libmachine: Parsing certificate...
	I0819 19:58:40.533266  481365 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem
	I0819 19:58:40.533292  481365 main.go:141] libmachine: Decoding PEM data...
	I0819 19:58:40.533311  481365 main.go:141] libmachine: Parsing certificate...
	I0819 19:58:40.533331  481365 main.go:141] libmachine: Running pre-create checks...
	I0819 19:58:40.533339  481365 main.go:141] libmachine: (NoKubernetes-803941) Calling .PreCreateCheck
	I0819 19:58:40.533877  481365 main.go:141] libmachine: (NoKubernetes-803941) Calling .GetConfigRaw
	I0819 19:58:40.534523  481365 main.go:141] libmachine: Creating machine...
	I0819 19:58:40.534534  481365 main.go:141] libmachine: (NoKubernetes-803941) Calling .Create
	I0819 19:58:40.534718  481365 main.go:141] libmachine: (NoKubernetes-803941) Creating KVM machine...
	I0819 19:58:40.536082  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | found existing default KVM network
	I0819 19:58:40.537730  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:40.537522  481419 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3c:88:e7} reservation:<nil>}
	I0819 19:58:40.539077  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:40.538969  481419 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fb:75:fb} reservation:<nil>}
	I0819 19:58:40.540850  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:40.540741  481419 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000205fc0}
	I0819 19:58:40.540899  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | created network xml: 
	I0819 19:58:40.540910  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | <network>
	I0819 19:58:40.540919  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |   <name>mk-NoKubernetes-803941</name>
	I0819 19:58:40.540935  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |   <dns enable='no'/>
	I0819 19:58:40.540951  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |   
	I0819 19:58:40.540963  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0819 19:58:40.540970  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |     <dhcp>
	I0819 19:58:40.540978  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0819 19:58:40.540986  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |     </dhcp>
	I0819 19:58:40.540992  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |   </ip>
	I0819 19:58:40.540998  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |   
	I0819 19:58:40.541003  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | </network>
	I0819 19:58:40.541012  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | 
	I0819 19:58:40.548622  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | trying to create private KVM network mk-NoKubernetes-803941 192.168.61.0/24...
	I0819 19:58:40.689282  481365 main.go:141] libmachine: (NoKubernetes-803941) Setting up store path in /home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941 ...
	I0819 19:58:40.689306  481365 main.go:141] libmachine: (NoKubernetes-803941) Building disk image from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 19:58:40.689328  481365 main.go:141] libmachine: (NoKubernetes-803941) Downloading /home/jenkins/minikube-integration/19423-430949/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 19:58:40.689345  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | private KVM network mk-NoKubernetes-803941 192.168.61.0/24 created
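Once the private network is created it can be inspected directly with virsh against the same libvirt URI the driver uses (qemu:///system, per the profile config later in this log); a sketch:
	virsh --connect qemu:///system net-list --all
	virsh --connect qemu:///system net-dumpxml mk-NoKubernetes-803941   # should match the XML generated above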
	I0819 19:58:40.689360  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:40.681264  481419 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:58:41.057317  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:41.053122  481419 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941/id_rsa...
	I0819 19:58:41.138052  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:41.137928  481419 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941/NoKubernetes-803941.rawdisk...
	I0819 19:58:41.138189  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Writing magic tar header
	I0819 19:58:41.138211  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Writing SSH key tar header
	I0819 19:58:41.138377  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:41.138308  481419 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941 ...
	I0819 19:58:41.138485  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941
	I0819 19:58:41.138506  481365 main.go:141] libmachine: (NoKubernetes-803941) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941 (perms=drwx------)
	I0819 19:58:41.138531  481365 main.go:141] libmachine: (NoKubernetes-803941) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines (perms=drwxr-xr-x)
	I0819 19:58:41.138540  481365 main.go:141] libmachine: (NoKubernetes-803941) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube (perms=drwxr-xr-x)
	I0819 19:58:41.138549  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines
	I0819 19:58:41.138568  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:58:41.138576  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949
	I0819 19:58:41.138588  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 19:58:41.138595  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Checking permissions on dir: /home/jenkins
	I0819 19:58:41.138605  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Checking permissions on dir: /home
	I0819 19:58:41.138615  481365 main.go:141] libmachine: (NoKubernetes-803941) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949 (perms=drwxrwxr-x)
	I0819 19:58:41.138622  481365 main.go:141] libmachine: (NoKubernetes-803941) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 19:58:41.138631  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Skipping /home - not owner
	I0819 19:58:41.138637  481365 main.go:141] libmachine: (NoKubernetes-803941) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 19:58:41.138645  481365 main.go:141] libmachine: (NoKubernetes-803941) Creating domain...
	I0819 19:58:41.141098  481365 main.go:141] libmachine: (NoKubernetes-803941) define libvirt domain using xml: 
	I0819 19:58:41.141113  481365 main.go:141] libmachine: (NoKubernetes-803941) <domain type='kvm'>
	I0819 19:58:41.141119  481365 main.go:141] libmachine: (NoKubernetes-803941)   <name>NoKubernetes-803941</name>
	I0819 19:58:41.141124  481365 main.go:141] libmachine: (NoKubernetes-803941)   <memory unit='MiB'>6000</memory>
	I0819 19:58:41.141144  481365 main.go:141] libmachine: (NoKubernetes-803941)   <vcpu>2</vcpu>
	I0819 19:58:41.141150  481365 main.go:141] libmachine: (NoKubernetes-803941)   <features>
	I0819 19:58:41.141156  481365 main.go:141] libmachine: (NoKubernetes-803941)     <acpi/>
	I0819 19:58:41.141163  481365 main.go:141] libmachine: (NoKubernetes-803941)     <apic/>
	I0819 19:58:41.141169  481365 main.go:141] libmachine: (NoKubernetes-803941)     <pae/>
	I0819 19:58:41.141174  481365 main.go:141] libmachine: (NoKubernetes-803941)     
	I0819 19:58:41.141180  481365 main.go:141] libmachine: (NoKubernetes-803941)   </features>
	I0819 19:58:41.141186  481365 main.go:141] libmachine: (NoKubernetes-803941)   <cpu mode='host-passthrough'>
	I0819 19:58:41.141192  481365 main.go:141] libmachine: (NoKubernetes-803941)   
	I0819 19:58:41.141197  481365 main.go:141] libmachine: (NoKubernetes-803941)   </cpu>
	I0819 19:58:41.141203  481365 main.go:141] libmachine: (NoKubernetes-803941)   <os>
	I0819 19:58:41.141208  481365 main.go:141] libmachine: (NoKubernetes-803941)     <type>hvm</type>
	I0819 19:58:41.141215  481365 main.go:141] libmachine: (NoKubernetes-803941)     <boot dev='cdrom'/>
	I0819 19:58:41.141221  481365 main.go:141] libmachine: (NoKubernetes-803941)     <boot dev='hd'/>
	I0819 19:58:41.141227  481365 main.go:141] libmachine: (NoKubernetes-803941)     <bootmenu enable='no'/>
	I0819 19:58:41.141232  481365 main.go:141] libmachine: (NoKubernetes-803941)   </os>
	I0819 19:58:41.141240  481365 main.go:141] libmachine: (NoKubernetes-803941)   <devices>
	I0819 19:58:41.141247  481365 main.go:141] libmachine: (NoKubernetes-803941)     <disk type='file' device='cdrom'>
	I0819 19:58:41.141258  481365 main.go:141] libmachine: (NoKubernetes-803941)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941/boot2docker.iso'/>
	I0819 19:58:41.141272  481365 main.go:141] libmachine: (NoKubernetes-803941)       <target dev='hdc' bus='scsi'/>
	I0819 19:58:41.141279  481365 main.go:141] libmachine: (NoKubernetes-803941)       <readonly/>
	I0819 19:58:41.141283  481365 main.go:141] libmachine: (NoKubernetes-803941)     </disk>
	I0819 19:58:41.141299  481365 main.go:141] libmachine: (NoKubernetes-803941)     <disk type='file' device='disk'>
	I0819 19:58:41.141308  481365 main.go:141] libmachine: (NoKubernetes-803941)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 19:58:41.141319  481365 main.go:141] libmachine: (NoKubernetes-803941)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941/NoKubernetes-803941.rawdisk'/>
	I0819 19:58:41.141324  481365 main.go:141] libmachine: (NoKubernetes-803941)       <target dev='hda' bus='virtio'/>
	I0819 19:58:41.141331  481365 main.go:141] libmachine: (NoKubernetes-803941)     </disk>
	I0819 19:58:41.141337  481365 main.go:141] libmachine: (NoKubernetes-803941)     <interface type='network'>
	I0819 19:58:41.141345  481365 main.go:141] libmachine: (NoKubernetes-803941)       <source network='mk-NoKubernetes-803941'/>
	I0819 19:58:41.141351  481365 main.go:141] libmachine: (NoKubernetes-803941)       <model type='virtio'/>
	I0819 19:58:41.141358  481365 main.go:141] libmachine: (NoKubernetes-803941)     </interface>
	I0819 19:58:41.141364  481365 main.go:141] libmachine: (NoKubernetes-803941)     <interface type='network'>
	I0819 19:58:41.141372  481365 main.go:141] libmachine: (NoKubernetes-803941)       <source network='default'/>
	I0819 19:58:41.141378  481365 main.go:141] libmachine: (NoKubernetes-803941)       <model type='virtio'/>
	I0819 19:58:41.141385  481365 main.go:141] libmachine: (NoKubernetes-803941)     </interface>
	I0819 19:58:41.141390  481365 main.go:141] libmachine: (NoKubernetes-803941)     <serial type='pty'>
	I0819 19:58:41.141397  481365 main.go:141] libmachine: (NoKubernetes-803941)       <target port='0'/>
	I0819 19:58:41.141403  481365 main.go:141] libmachine: (NoKubernetes-803941)     </serial>
	I0819 19:58:41.141409  481365 main.go:141] libmachine: (NoKubernetes-803941)     <console type='pty'>
	I0819 19:58:41.141414  481365 main.go:141] libmachine: (NoKubernetes-803941)       <target type='serial' port='0'/>
	I0819 19:58:41.141419  481365 main.go:141] libmachine: (NoKubernetes-803941)     </console>
	I0819 19:58:41.141424  481365 main.go:141] libmachine: (NoKubernetes-803941)     <rng model='virtio'>
	I0819 19:58:41.141432  481365 main.go:141] libmachine: (NoKubernetes-803941)       <backend model='random'>/dev/random</backend>
	I0819 19:58:41.141438  481365 main.go:141] libmachine: (NoKubernetes-803941)     </rng>
	I0819 19:58:41.141443  481365 main.go:141] libmachine: (NoKubernetes-803941)     
	I0819 19:58:41.141448  481365 main.go:141] libmachine: (NoKubernetes-803941)     
	I0819 19:58:41.141453  481365 main.go:141] libmachine: (NoKubernetes-803941)   </devices>
	I0819 19:58:41.141458  481365 main.go:141] libmachine: (NoKubernetes-803941) </domain>
	I0819 19:58:41.141469  481365 main.go:141] libmachine: (NoKubernetes-803941) 
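The domain XML printed above is what libvirt ends up defining; it can be read back, or its network attachments checked, with virsh (a sketch, not part of the recorded run):
	virsh --connect qemu:///system dumpxml NoKubernetes-803941
	virsh --connect qemu:///system domiflist NoKubernetes-803941   # one NIC on mk-NoKubernetes-803941, one on default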
	I0819 19:58:41.148013  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:f6:ea:95 in network default
	I0819 19:58:41.148424  481365 main.go:141] libmachine: (NoKubernetes-803941) Ensuring networks are active...
	I0819 19:58:41.148447  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:41.150010  481365 main.go:141] libmachine: (NoKubernetes-803941) Ensuring network default is active
	I0819 19:58:41.150483  481365 main.go:141] libmachine: (NoKubernetes-803941) Ensuring network mk-NoKubernetes-803941 is active
	I0819 19:58:41.151535  481365 main.go:141] libmachine: (NoKubernetes-803941) Getting domain xml...
	I0819 19:58:41.152576  481365 main.go:141] libmachine: (NoKubernetes-803941) Creating domain...
	I0819 19:58:42.936550  481365 main.go:141] libmachine: (NoKubernetes-803941) Waiting to get IP...
	I0819 19:58:42.937470  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:42.938079  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:42.938140  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:42.938073  481419 retry.go:31] will retry after 233.031281ms: waiting for machine to come up
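The retry loop above is polling libvirt's DHCP state for the new guest MAC; the same information is visible with a sketch like:
	virsh --connect qemu:///system net-dhcp-leases mk-NoKubernetes-803941
	# the lease for 52:54:00:40:8f:2e appears once the guest has booted and requested an address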
	I0819 19:58:40.212535  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:58:40.212579  481208 machine.go:96] duration metric: took 8.823594081s to provisionDockerMachine
	I0819 19:58:40.212595  481208 start.go:293] postStartSetup for "pause-232147" (driver="kvm2")
	I0819 19:58:40.212609  481208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:58:40.212642  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.213057  481208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:58:40.213092  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:40.216311  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.216817  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.216844  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.217076  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:40.217330  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.217515  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:40.217682  481208 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/pause-232147/id_rsa Username:docker}
	I0819 19:58:40.316283  481208 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:58:40.322399  481208 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:58:40.322447  481208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:58:40.322557  481208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:58:40.322676  481208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:58:40.322820  481208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:58:40.337792  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:58:40.372596  481208 start.go:296] duration metric: took 159.984571ms for postStartSetup
	I0819 19:58:40.372650  481208 fix.go:56] duration metric: took 9.01375792s for fixHost
	I0819 19:58:40.372680  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:40.376119  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.376610  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.376639  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.376989  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:40.377312  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.377518  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.377676  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:40.377858  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:40.378087  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:40.378105  481208 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:58:40.507374  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724097520.497291171
	
	I0819 19:58:40.507408  481208 fix.go:216] guest clock: 1724097520.497291171
	I0819 19:58:40.507418  481208 fix.go:229] Guest: 2024-08-19 19:58:40.497291171 +0000 UTC Remote: 2024-08-19 19:58:40.372656161 +0000 UTC m=+11.987187457 (delta=124.63501ms)
	I0819 19:58:40.507448  481208 fix.go:200] guest clock delta is within tolerance: 124.63501ms
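The clock-skew check compares date +%s.%N on the host with the same command inside the guest; roughly, it can be reproduced with the following sketch (profile name taken from this run):
	host=$(date +%s.%N)
	guest=$(minikube -p pause-232147 ssh -- date +%s.%N)
	awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %.3fs\n", g - h }'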
	I0819 19:58:40.507456  481208 start.go:83] releasing machines lock for "pause-232147", held for 9.148597464s
	I0819 19:58:40.507935  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.508287  481208 main.go:141] libmachine: (pause-232147) Calling .GetIP
	I0819 19:58:40.513009  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.513574  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.513608  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.514007  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.514704  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.514942  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.515039  481208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:58:40.515086  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:40.515181  481208 ssh_runner.go:195] Run: cat /version.json
	I0819 19:58:40.515194  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:40.519584  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.519937  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.520469  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.520648  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.520677  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.520717  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.521387  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:40.521428  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:40.521678  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.521854  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:40.522053  481208 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/pause-232147/id_rsa Username:docker}
	I0819 19:58:40.522692  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.522868  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:40.523041  481208 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/pause-232147/id_rsa Username:docker}
	I0819 19:58:40.639459  481208 ssh_runner.go:195] Run: systemctl --version
	I0819 19:58:40.655718  481208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:58:40.840962  481208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:58:40.853198  481208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:58:40.853277  481208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:58:40.868796  481208 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 19:58:40.868827  481208 start.go:495] detecting cgroup driver to use...
	I0819 19:58:40.868899  481208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:58:40.900245  481208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:58:40.920400  481208 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:58:40.920463  481208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:58:40.938443  481208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:58:40.955169  481208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:58:41.146319  481208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:58:41.353301  481208 docker.go:233] disabling docker service ...
	I0819 19:58:41.353400  481208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:58:41.387771  481208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:58:41.412948  481208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:58:41.554623  481208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:58:41.861017  481208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:58:41.982330  481208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:58:42.076111  481208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:58:42.076186  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.109280  481208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:58:42.109371  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.145729  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.189101  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.240615  481208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:58:42.274951  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.289026  481208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.305852  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.326459  481208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:58:42.346162  481208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:58:42.356878  481208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:58:42.608928  481208 ssh_runner.go:195] Run: sudo systemctl restart crio
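The series of sed edits above boils down to a handful of keys in /etc/crio/crio.conf.d/02-crio.conf; after the restart they can be verified with the following sketch (expected values taken from the commands shown):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",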
	I0819 19:58:43.169953  481208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:58:43.170036  481208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:58:43.176235  481208 start.go:563] Will wait 60s for crictl version
	I0819 19:58:43.176307  481208 ssh_runner.go:195] Run: which crictl
	I0819 19:58:43.180628  481208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:58:43.216848  481208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:58:43.216956  481208 ssh_runner.go:195] Run: crio --version
	I0819 19:58:43.253336  481208 ssh_runner.go:195] Run: crio --version
	I0819 19:58:43.293348  481208 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:58:43.294479  481208 main.go:141] libmachine: (pause-232147) Calling .GetIP
	I0819 19:58:43.297894  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:43.298326  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:43.298362  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:43.298669  481208 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:58:43.303270  481208 kubeadm.go:883] updating cluster {Name:pause-232147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:pause-232147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:58:43.303482  481208 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:58:43.303557  481208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:58:43.362917  481208 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:58:43.362952  481208 crio.go:433] Images already preloaded, skipping extraction
	I0819 19:58:43.363019  481208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:58:43.405430  481208 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:58:43.405463  481208 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:58:43.405474  481208 kubeadm.go:934] updating node { 192.168.50.125 8443 v1.31.0 crio true true} ...
	I0819 19:58:43.405617  481208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-232147 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-232147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
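The unit fragment above is the kubelet drop-in minikube generates (the log does not show where it is written); the effective unit, including this ExecStart override, can be dumped inside the guest with a sketch like:
	systemctl cat kubelet        # shows the drop-in with the ExecStart line above
	systemctl is-active kubelet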
	I0819 19:58:43.405717  481208 ssh_runner.go:195] Run: crio config
	I0819 19:58:39.926453  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 19:58:39.931939  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0819 19:58:39.935237  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 19:58:39.936632  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 19:58:39.943545  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:58:39.958493  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0819 19:58:39.960752  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0819 19:58:40.076539  481009 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0819 19:58:40.076654  481009 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 19:58:40.076724  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:40.117605  481009 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693" in container runtime
	I0819 19:58:40.117686  481009 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 19:58:40.117691  481009 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0819 19:58:40.117842  481009 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0819 19:58:40.117890  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:40.117903  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:40.158070  481009 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d" in container runtime
	I0819 19:58:40.158176  481009 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 19:58:40.158143  481009 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0819 19:58:40.158252  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:40.158284  481009 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:58:40.158334  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:40.177163  481009 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18" in container runtime
	I0819 19:58:40.177213  481009 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237" in container runtime
	I0819 19:58:40.177225  481009 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 19:58:40.177231  481009 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 19:58:40.177277  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:40.177294  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 19:58:40.177312  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 19:58:40.177322  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0819 19:58:40.177277  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:40.177363  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 19:58:40.177400  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:58:40.225534  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 19:58:40.300041  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 19:58:40.300118  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:58:40.300188  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 19:58:40.300264  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0819 19:58:40.300335  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 19:58:40.300376  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 19:58:40.300461  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 19:58:40.384846  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:58:40.454967  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 19:58:40.455109  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:58:40.455138  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 19:58:40.455267  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 19:58:40.455354  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0819 19:58:40.586167  481009 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.1
	I0819 19:58:40.586264  481009 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0819 19:58:40.586344  481009 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0819 19:58:40.613844  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 19:58:40.613929  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 19:58:40.613997  481009 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0819 19:58:40.614063  481009 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 19:58:40.614109  481009 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0819 19:58:40.614122  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0819 19:58:40.614165  481009 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 19:58:40.614205  481009 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 19:58:40.614235  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 19:58:40.723333  481009 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0819 19:58:40.723432  481009 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.1
	I0819 19:58:40.723472  481009 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.1
	I0819 19:58:40.723505  481009 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0819 19:58:40.723521  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I0819 19:58:40.723617  481009 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0819 19:58:40.723630  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0819 19:58:40.801630  481009 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 19:58:40.801710  481009 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0819 19:58:41.142214  481009 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0819 19:58:41.142252  481009 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 19:58:41.142302  481009 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0819 19:58:41.852181  481009 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 19:58:41.852232  481009 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0819 19:58:41.852290  481009 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0819 19:58:44.313025  481009 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.460697847s)
	I0819 19:58:44.313059  481009 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0819 19:58:44.313108  481009 cache_images.go:92] duration metric: took 4.547781665s to LoadCachedImages
	W0819 19:58:44.313229  481009 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
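The block above is the image-cache fast path: for each cached image the target under /var/lib/minikube/images is stat'ed first, and the tarball is only transferred (and then loaded with podman load) when that stat fails, as it does here for etcd_3.5.3-0, pause_3.7 and coredns_v1.8.6. A minimal local sketch of that check-then-copy pattern, using plain filesystem paths instead of minikube's ssh_runner (the paths below are hypothetical, for illustration only):

    // cachecopy.go - simplified, local-only sketch of the "stat, then transfer
    // if missing" pattern seen in the log; minikube performs this over SSH.
    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"path/filepath"
    )

    // ensureImage copies src into destDir only when the destination file is
    // absent, mirroring the `stat -c "%s %y"` existence check above.
    func ensureImage(src, destDir string) error {
    	dest := filepath.Join(destDir, filepath.Base(src))
    	if _, err := os.Stat(dest); err == nil {
    		fmt.Printf("skipping %s: already present\n", dest)
    		return nil
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	n, err := io.Copy(out, in)
    	if err != nil {
    		return err
    	}
    	fmt.Printf("transferred %s --> %s (%d bytes)\n", src, dest, n)
    	return nil
    }

    func main() {
    	// Hypothetical source/target paths for illustration only.
    	if err := ensureImage("cache/images/etcd_3.5.3-0", "/var/lib/minikube/images"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }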
	I0819 19:58:44.313247  481009 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.24.1 crio true true} ...
	I0819 19:58:44.313370  481009 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=running-upgrade-814149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-814149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:58:44.313454  481009 ssh_runner.go:195] Run: crio config
	I0819 19:58:44.372482  481009 cni.go:84] Creating CNI manager for ""
	I0819 19:58:44.372506  481009 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:58:44.372515  481009 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:58:44.372534  481009 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-814149 NodeName:running-upgrade-814149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:58:44.372722  481009 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "running-upgrade-814149"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:58:44.372793  481009 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0819 19:58:44.383107  481009 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:58:44.383179  481009 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:58:44.393546  481009 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0819 19:58:44.410718  481009 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:58:44.428280  481009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0819 19:58:44.462324  481009 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0819 19:58:44.466740  481009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:58:44.643155  481009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:58:44.662762  481009 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149 for IP: 192.168.39.238
	I0819 19:58:44.662789  481009 certs.go:194] generating shared ca certs ...
	I0819 19:58:44.662810  481009 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:44.663001  481009 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:58:44.663057  481009 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:58:44.663067  481009 certs.go:256] generating profile certs ...
	I0819 19:58:44.663167  481009 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/client.key
	I0819 19:58:44.663195  481009 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.key.2307c246
	I0819 19:58:44.663211  481009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.crt.2307c246 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238]
	I0819 19:58:45.024832  481009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.crt.2307c246 ...
	I0819 19:58:45.024866  481009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.crt.2307c246: {Name:mkbd45db25145fbea141d679e4e3b5e94a91e521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:45.025040  481009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.key.2307c246 ...
	I0819 19:58:45.025052  481009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.key.2307c246: {Name:mk6380366927801ea722cd807662f92ced7d318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:45.025120  481009 certs.go:381] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.crt.2307c246 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.crt
	I0819 19:58:45.025326  481009 certs.go:385] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.key.2307c246 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.key
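At this point a fresh API-server serving certificate is minted because no existing one covers the current IP set; the log records the SANs it needs (10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.39.238). A rough crypto/x509 sketch of issuing a certificate with those IP SANs follows; it is self-signed for brevity, whereas the real flow signs with the minikubeCA key:

    // apiservercert.go - illustrative only: create a serving certificate that
    // carries the same IP SANs seen in the log line above. Self-signed here;
    // minikube signs the real certificate with its cluster CA.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{ // SANs taken from the log line above
    			net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.238"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	// Emit the certificate in PEM form, analogous to apiserver.crt above.
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		panic(err)
    	}
    }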
	I0819 19:58:45.025510  481009 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/proxy-client.key
	I0819 19:58:45.025692  481009 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:58:45.025729  481009 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:58:45.025744  481009 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:58:45.025770  481009 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:58:45.025792  481009 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:58:45.025816  481009 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:58:45.025886  481009 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:58:45.027090  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:58:45.061434  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:58:45.094381  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:58:45.133397  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:58:45.173530  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 19:58:45.206716  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:58:45.241999  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:58:45.274886  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:58:45.308813  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:58:45.354057  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:58:45.381606  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:58:45.410472  481009 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:58:45.428456  481009 ssh_runner.go:195] Run: openssl version
	I0819 19:58:45.435083  481009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:58:45.445993  481009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.451574  481009 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.451671  481009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.458486  481009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:58:45.470536  481009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:58:45.482076  481009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.487234  481009 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.487314  481009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.493755  481009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:58:45.505043  481009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:58:45.517675  481009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.524343  481009 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.524415  481009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.531883  481009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:58:45.542831  481009 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:58:45.548838  481009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:58:45.556398  481009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:58:45.562545  481009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:58:45.568814  481009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:58:45.574945  481009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:58:45.581335  481009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
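Each existing control-plane certificate is then checked with openssl x509 -checkend 86400, i.e. whether it expires within the next 24 hours. An equivalent check in Go, assuming a local PEM path (an illustrative helper, not minikube's code):

    // certexpiry.go - minimal sketch of the 24-hour expiry check that
    // `openssl x509 -checkend 86400` performs on each certificate above.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the certificate at path expires within the
    // given window, e.g. /var/lib/minikube/certs/apiserver-kubelet-client.crt.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	if len(os.Args) < 2 {
    		fmt.Fprintln(os.Stderr, "usage: certexpiry <cert.pem>")
    		os.Exit(2)
    	}
    	soon, err := expiresWithin(os.Args[1], 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(2)
    	}
    	if soon {
    		fmt.Println("certificate expires within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least 24h")
    }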
	I0819 19:58:45.587672  481009 kubeadm.go:392] StartCluster: {Name:running-upgrade-814149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running
-upgrade-814149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 19:58:45.587768  481009 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:58:45.587824  481009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:58:45.619870  481009 cri.go:89] found id: "13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d"
	I0819 19:58:45.619895  481009 cri.go:89] found id: "8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3"
	I0819 19:58:45.619900  481009 cri.go:89] found id: "7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5"
	I0819 19:58:45.619904  481009 cri.go:89] found id: "8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e"
	I0819 19:58:45.619908  481009 cri.go:89] found id: "a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7"
	I0819 19:58:45.619913  481009 cri.go:89] found id: "2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5"
	I0819 19:58:45.619917  481009 cri.go:89] found id: "686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973"
	I0819 19:58:45.619921  481009 cri.go:89] found id: "5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7"
	I0819 19:58:45.619926  481009 cri.go:89] found id: ""
	I0819 19:58:45.619983  481009 ssh_runner.go:195] Run: sudo runc list -f json
	I0819 19:58:45.651618  481009 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d","pid":2128,"status":"running","bundle":"/run/containers/storage/overlay-containers/13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d/userdata","rootfs":"/var/lib/containers/storage/overlay/747de1858df7669c0df8b6ae7b583907346ff010ee1c242ca0865f7c60e2bbfd/merged","created":"2024-08-19T19:58:32.340621772Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b2097f03","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b2097f03\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.te
rminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:58:32.036995856Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d225c6b3-f05c-4157-94f0-d78926d01235\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_d225c6b3-f05c-4157-94f0-d78926d01235/storage-provisioner/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provi
sioner\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/747de1858df7669c0df8b6ae7b583907346ff010ee1c242ca0865f7c60e2bbfd/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_d225c6b3-f05c-4157-94f0-d78926d01235_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_d225c6b3-f05c-4157-94f0-d78926d01235_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d225c6b3-f05c-4157-94f0
-d78926d01235/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d225c6b3-f05c-4157-94f0-d78926d01235/containers/storage-provisioner/032ea17c\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d225c6b3-f05c-4157-94f0-d78926d01235/volumes/kubernetes.io~projected/kube-api-access-kgf5m\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d225c6b3-f05c-4157-94f0-d78926d01235","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\"
:\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2024-08-19T19:58:28.728194527Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5","pid":1088,"status":"running","bundle":"/run/containers/storage/overlay-containers/2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5/userdata","rootfs":"/var/lib/containers/storage/overlay/c8bad195526a65f3f1628f4feb928e9c8c35efd0afd12ad25394
32d3e2c944e9/merged","created":"2024-08-19T19:57:57.082911826Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c2b4c8cb","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c2b4c8cb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:57:57.010971681Z","io.kubernetes.cri-o.Image":"e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693","io.kubernetes.cri-o.Im
ageName":"k8s.gcr.io/kube-apiserver:v1.24.1","io.kubernetes.cri-o.ImageRef":"e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-814149\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"68cd9fb77d32dc11dd8265589f1f254e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-814149_68cd9fb77d32dc11dd8265589f1f254e/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c8bad195526a65f3f1628f4feb928e9c8c35efd0afd12ad2539432d3e2c944e9/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-running-upgrade-814149_kube-system_68cd9fb77d32dc11dd8265589f1f254e_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1
528e23487704b0efe1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-running-upgrade-814149_kube-system_68cd9fb77d32dc11dd8265589f1f254e_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/68cd9fb77d32dc11dd8265589f1f254e/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/68cd9fb77d32dc11dd8265589f1f254e/containers/kube-apiserver/39a60df5\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/c
erts\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-running-upgrade-814149","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"68cd9fb77d32dc11dd8265589f1f254e","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.238:8443","kubernetes.io/config.hash":"68cd9fb77d32dc11dd8265589f1f254e","kubernetes.io/config.seen":"2024-08-19T19:57:43.433635108Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7","pid":948,"status":"running","bundle":"/run/containers/storage/overlay-containers/5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7/userdata","roo
tfs":"/var/lib/containers/storage/overlay/62883e2625ad60465677e8171169a24766c8b59407f63953ee2eeb6eef68d2b2/merged","created":"2024-08-19T19:57:45.722805472Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"eff52b7d","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"eff52b7d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:57:45.684026186Z","io.kubernetes.cri-o.Ima
ge":"18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.24.1","io.kubernetes.cri-o.ImageRef":"18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-814149\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"61e3b0a6e8f83345f590745946a230a3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-814149_61e3b0a6e8f83345f590745946a230a3/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/62883e2625ad60465677e8171169a24766c8b59407f63953ee2eeb6eef68d2b2/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-running-upgrade-814149_kube-system_61e3b0a6e8f83345f590745946a230a3_0","io.kubernetes.cri-o.ResolvPath":
"/var/run/containers/storage/overlay-containers/9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-running-upgrade-814149_kube-system_61e3b0a6e8f83345f590745946a230a3_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/61e3b0a6e8f83345f590745946a230a3/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/61e3b0a6e8f83345f590745946a230a3/containers/kube-scheduler/0c94193c\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"ku
be-scheduler-running-upgrade-814149","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"61e3b0a6e8f83345f590745946a230a3","kubernetes.io/config.hash":"61e3b0a6e8f83345f590745946a230a3","kubernetes.io/config.seen":"2024-08-19T19:57:43.433638577Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b","pid":1458,"status":"running","bundle":"/run/containers/storage/overlay-containers/5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b/userdata","rootfs":"/var/lib/containers/storage/overlay/62f5158087b4eeb2524b2552be8890bb391b9912f809a50e87ce6d7cca127ec9/merged","created":"2024-08-19T19:58:29.243218254Z","annotations":{"addonmanage
r.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":
\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2024-08-19T19:58:28.728194527Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-podd225c6b3_f05c_4157_94f0_d78926d01235.slice","io.kubernetes.cri-o.ContainerID":"5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_d225c6b3-f05c-4157-94f0-d78926d01235_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-19T19:58:29.110455571Z","io.kubernetes.cri-o.HostName":"running-upgrade-814149","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"storage-provisioner\
",\"integration-test\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.pod.uid\":\"d225c6b3-f05c-4157-94f0-d78926d01235\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_d225c6b3-f05c-4157-94f0-d78926d01235/5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"storage-provisioner\",\"UID\":\"d225c6b3-f05c-4157-94f0-d78926d01235\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/62f5158087b4eeb2524b2552be8890bb391b9912f809a50e87ce6d7cca127ec9/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_d225c6b3-f05c-4157-94f0-d78926d01235_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.Pri
vilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"d225c6b3-f05c-4157-94f0-d78926d01235","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{
\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2024-08-19T19:58:28.728194527Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973","pid":1021,"status":"running","bundle":"/run/containers/storage/overlay-containers/686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973/userdata","rootfs":"/var/lib/containers/storage/overlay/af264d40d4e180612dd1ecc6ca5f3cd642f97e74f29266df9a35739a8de63220/merged","created":"2024-08-19T19:57:56.072440943Z","annotations":{"io.container.manager":"cri-
o","io.kubernetes.container.hash":"1c682979","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1c682979\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:57:56.006160972Z","io.kubernetes.cri-o.Image":"b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.24.1","io.kubernetes.cri-o.ImageRef":"b4ea7e648530
d171b38f67305e22caf49f9d968d71c558e663709b805076538d","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-814149\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cd3a71b3f87971114ebb42fa0c1c70bb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-running-upgrade-814149_cd3a71b3f87971114ebb42fa0c1c70bb/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/af264d40d4e180612dd1ecc6ca5f3cd642f97e74f29266df9a35739a8de63220/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-running-upgrade-814149_kube-system_cd3a71b3f87971114ebb42fa0c1c70bb_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a/userdat
a/resolv.conf","io.kubernetes.cri-o.SandboxID":"c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-running-upgrade-814149_kube-system_cd3a71b3f87971114ebb42fa0c1c70bb_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cd3a71b3f87971114ebb42fa0c1c70bb/containers/kube-controller-manager/6eb1d355\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cd3a71b3f87971114ebb42fa0c1c70bb/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true}
,{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-814149","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cd3a71b3f87971114ebb42fa0c1c70bb","kubernetes.io/config.hash":"cd3a71b3f87971114ebb42fa0c1c70bb","kubernetes.io/config.seen":"2024-08-19T19:57:43.433637214Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7862e0a
bf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5","pid":1601,"status":"running","bundle":"/run/containers/storage/overlay-containers/7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5/userdata","rootfs":"/var/lib/containers/storage/overlay/f1e271646e3f1d98222133cdf619de06257099b07b9f8aaf9cb13fb7d063c00a/merged","created":"2024-08-19T19:58:30.121593914Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d36c3c1c","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d36c3c1c\",\"io.kubernetes.container.ports\
":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:58:30.030231211Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri-o.ImageRef":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","io.kuberne
tes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-6d4b75cb6d-n6bjb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2c398c01-e3a8-4962-905b-8e22c52a6f6d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6d4b75cb6d-n6bjb_2c398c01-e3a8-4962-905b-8e22c52a6f6d/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f1e271646e3f1d98222133cdf619de06257099b07b9f8aaf9cb13fb7d063c00a/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-6d4b75cb6d-n6bjb_kube-system_2c398c01-e3a8-4962-905b-8e22c52a6f6d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6","io.kubernetes.cri-o.SandboxName":"k8s_coredns-6d4b75cb6d-n6bjb_kube-
system_2c398c01-e3a8-4962-905b-8e22c52a6f6d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/2c398c01-e3a8-4962-905b-8e22c52a6f6d/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2c398c01-e3a8-4962-905b-8e22c52a6f6d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2c398c01-e3a8-4962-905b-8e22c52a6f6d/containers/coredns/aabcb85d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/2c398c01-e3a8-4962-905b-8e22c52a6f6d/volumes/kubernetes.io~projected/kube-api-access-d8v9w\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-6d4b75cb6d-n6bjb","io.kubernetes.pod.namespa
ce":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2c398c01-e3a8-4962-905b-8e22c52a6f6d","kubernetes.io/config.seen":"2024-08-19T19:58:28.720999136Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754","pid":1131,"status":"running","bundle":"/run/containers/storage/overlay-containers/86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754/userdata","rootfs":"/var/lib/containers/storage/overlay/6df4f8f8d61978d5ef45fc33e0861fdc25c8e33441809bcbe666bd8f4b39127e/merged","created":"2024-08-19T19:57:58.636349563Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes
.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"e725c1cb4074b8cd283bfdc2d5a3bcbc\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.238:2379\",\"kubernetes.io/config.seen\":\"2024-08-19T19:57:43.433588549Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pode725c1cb4074b8cd283bfdc2d5a3bcbc.slice","io.kubernetes.cri-o.ContainerID":"86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-running-upgrade-814149_kube-system_e725c1cb4074b8cd283bfdc2d5a3bcbc_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-19T19:57:58.566293016Z","io.kubernetes.cri-o.HostName":"running-upgrade-814149","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"
etcd-running-upgrade-814149","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"e725c1cb4074b8cd283bfdc2d5a3bcbc\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-running-upgrade-814149\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-814149_e725c1cb4074b8cd283bfdc2d5a3bcbc/86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"etcd-running-upgrade-814149\",\"UID\":\"e725c1cb4074b8cd283bfdc2d5a3bcbc\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6df4f8f8d61978d5ef45fc33e0861fdc25c8e33441809bcbe666bd8f4b39127e/merged","io.kubernetes.cri-o.Name":"k8s_etcd-running-upgrade-814149_kube-system_e725c1cb4074b8cd283bfdc2d5a3bcbc_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\
":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754/userdata/shm","io.kubernetes.pod.name":"etcd-running-upgrade-814149","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e725c1cb4074b8cd283bfdc2d5a3bcbc","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.238:2379","kubernetes.io/config.hash":"e725c1cb4074b8cd283bfdc2d5a3bcbc","kubernetes.io/config.seen":"2024-08-19T19:57:43.433588549Z","kubernetes.io/config.source":"file
","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e","pid":1531,"status":"running","bundle":"/run/containers/storage/overlay-containers/8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e/userdata","rootfs":"/var/lib/containers/storage/overlay/d65bb6f8a205a0256557aeb49d3972f1646be7a404ef0d98ad3ba636c0cd6e9d/merged","created":"2024-08-19T19:58:29.906538609Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"84df7c1c","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"84df7c1c\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-lo
g\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:58:29.647397936Z","io.kubernetes.cri-o.Image":"beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.24.1","io.kubernetes.cri-o.ImageRef":"beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-zlldb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"72574efa-cee8-4763-bf3d-424af3ae1c6c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-zlldb_72574efa-cee8-4763-bf3d-424af3ae1c6c/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io
.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d65bb6f8a205a0256557aeb49d3972f1646be7a404ef0d98ad3ba636c0cd6e9d/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-zlldb_kube-system_72574efa-cee8-4763-bf3d-424af3ae1c6c_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-zlldb_kube-system_72574efa-cee8-4763-bf3d-424af3ae1c6c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc
/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72574efa-cee8-4763-bf3d-424af3ae1c6c/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72574efa-cee8-4763-bf3d-424af3ae1c6c/containers/kube-proxy/ecc4073c\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/72574efa-cee8-4763-bf3d-424af3ae1c6c/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/72574efa-cee8-4763-bf3d-424af3ae1c6c/volumes/kubernetes.io~projected/kube-api-access-j5fq9\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-zlldb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72574efa-cee8-4763-bf3d-424af3ae1c6c","kubernetes.io/config.seen":"2024-08-19T19:58:28.317453402Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['cr
io.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3/userdata","rootfs":"/var/lib/containers/storage/overlay/c91e928118b193c0427669ec7fb0293846d75f165011a0da0f401fc1c20ecd77/merged","created":"2024-08-19T19:58:30.31801087Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b2097f03","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b2097f03\",\"io.kubernetes.co
ntainer.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:58:30.151267878Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d225c6b3-f05c-4157-94f0-d78926d01235\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-
provisioner_d225c6b3-f05c-4157-94f0-d78926d01235/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c91e928118b193c0427669ec7fb0293846d75f165011a0da0f401fc1c20ecd77/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_d225c6b3-f05c-4157-94f0-d78926d01235_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_d225c6b3-f05c-4157-94f0-d78926d01235_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp
\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d225c6b3-f05c-4157-94f0-d78926d01235/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d225c6b3-f05c-4157-94f0-d78926d01235/containers/storage-provisioner/781d401c\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d225c6b3-f05c-4157-94f0-d78926d01235/volumes/kubernetes.io~projected/kube-api-access-kgf5m\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d225c6b3-f05c-4157-94f0-d78926d01235","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-p
rovisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2024-08-19T19:58:28.728194527Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7","pid":920,"status":"running","bundle":"/run/containers/storage/overlay-containers/9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0
fc68408f7/userdata","rootfs":"/var/lib/containers/storage/overlay/3142b172446dbd5d3e8e287c1f0f94f2f35c301f610c8eaa26352101226f77a7/merged","created":"2024-08-19T19:57:45.427297119Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"61e3b0a6e8f83345f590745946a230a3\",\"kubernetes.io/config.seen\":\"2024-08-19T19:57:43.433638577Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod61e3b0a6e8f83345f590745946a230a3.slice","io.kubernetes.cri-o.ContainerID":"9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-running-upgrade-814149_kube-system_61e3b0a6e8f83345f590745946a230a3_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-19T19:57:45.377086179Z","io.kubernetes.cri-o.HostName":"running-upgrade-814149","io.kubernetes.cri-o.Hos
tNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-running-upgrade-814149","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-814149\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"61e3b0a6e8f83345f590745946a230a3\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-814149_61e3b0a6e8f83345f590745946a230a3/9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-scheduler-running-upgrade-814149\",\"UID\":\"61e3b0a6e8f83345f590745946a230a3\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint
":"/var/lib/containers/storage/overlay/3142b172446dbd5d3e8e287c1f0f94f2f35c301f610c8eaa26352101226f77a7/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-running-upgrade-814149_kube-system_61e3b0a6e8f83345f590745946a230a3_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-running-upgrade-814149","io.ku
bernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"61e3b0a6e8f83345f590745946a230a3","kubernetes.io/config.hash":"61e3b0a6e8f83345f590745946a230a3","kubernetes.io/config.seen":"2024-08-19T19:57:43.433638577Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7","pid":1162,"status":"running","bundle":"/run/containers/storage/overlay-containers/a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7/userdata","rootfs":"/var/lib/containers/storage/overlay/4168dbdf4384c147d25af8a80b3ec191a1a89b3842fc58ad4730009178a75c5c/merged","created":"2024-08-19T19:57:58.950053047Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"841356c0","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kuber
netes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"841356c0\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:57:58.89698761Z","io.kubernetes.cri-o.Image":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri-o.ImageRef":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-running-upgrade-814149\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kuberne
tes.pod.uid\":\"e725c1cb4074b8cd283bfdc2d5a3bcbc\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-814149_e725c1cb4074b8cd283bfdc2d5a3bcbc/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4168dbdf4384c147d25af8a80b3ec191a1a89b3842fc58ad4730009178a75c5c/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-running-upgrade-814149_kube-system_e725c1cb4074b8cd283bfdc2d5a3bcbc_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754","io.kubernetes.cri-o.SandboxName":"k8s_etcd-running-upgrade-814149_kube-system_e725c1cb4074b8cd283bfdc2d5a3bcbc_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.
cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e725c1cb4074b8cd283bfdc2d5a3bcbc/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e725c1cb4074b8cd283bfdc2d5a3bcbc/containers/etcd/ee4a419d\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-running-upgrade-814149","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e725c1cb4074b8cd283bfdc2d5a3bcbc","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.238:2379","kubernetes.io/config.hash":"e725c1cb4074b8cd283bfdc2d5a3bcbc","kubernetes.io/config.seen":"2024-08-19T19:57:43.433588549Z","kubernetes.io/config.source":"
file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6","pid":1485,"status":"running","bundle":"/run/containers/storage/overlay-containers/bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6/userdata","rootfs":"/var/lib/containers/storage/overlay/d21c7c22745a2ae18b471a67bb91e1406bacbdb95449f38a45bd1675b0f46f96/merged","created":"2024-08-19T19:58:29.360348145Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-08-19T19:58:28.720999136Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"72:d2:ef:75:fd:5c\"},{\"name\":\"veth2
8df23a7\",\"mac\":\"92:c2:c0:ab:5e:0d\"},{\"name\":\"eth0\",\"mac\":\"da:d2:9d:44:f4:36\",\"sandbox\":\"/var/run/netns/b91d5625-7a8d-4638-946c-cc93cb9fe609\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod2c398c01_e3a8_4962_905b_8e22c52a6f6d.slice","io.kubernetes.cri-o.ContainerID":"bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-6d4b75cb6d-n6bjb_kube-system_2c398c01-e3a8-4962-905b-8e22c52a6f6d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-19T19:58:29.080224201Z","io.kubernetes.cri-o.HostName":"coredns-6d4b75cb6d-n6bjb","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6/userdat
a/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-6d4b75cb6d-n6bjb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"2c398c01-e3a8-4962-905b-8e22c52a6f6d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-6d4b75cb6d-n6bjb\",\"k8s-app\":\"kube-dns\",\"pod-template-hash\":\"6d4b75cb6d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6d4b75cb6d-n6bjb_2c398c01-e3a8-4962-905b-8e22c52a6f6d/bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"coredns-6d4b75cb6d-n6bjb\",\"UID\":\"2c398c01-e3a8-4962-905b-8e22c52a6f6d\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d21c7c22745a2ae18b471a67bb91e1406bacbdb95449f38a45bd1675b0f46f96/merged","io.kubernetes.cri-o.Name":"k8s_coredns-6d4b75cb6d-n6bjb_kube-system_2c398c01-e3a8-4962-905b-8e22c
52a6f6d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6/userdata/shm","io.kubernetes.pod.name":"coredns-6d4b75cb6d-n6bjb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"2c398c01-e3a8-4962-905b-8e22c52a6f6d","k8s-app":"kube-dns","kubernetes.io/config.seen":"2024-08-19T19:58:28.720999136Z","kubernetes.io/config.source":"api","org.systemd.property.Coll
ectMode":"'inactive-or-failed'","pod-template-hash":"6d4b75cb6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a","pid":991,"status":"running","bundle":"/run/containers/storage/overlay-containers/c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a/userdata","rootfs":"/var/lib/containers/storage/overlay/61fe3381ef9f42f48a038466115de13d8a1a1370e99f2c5ffe5515b252647fab/merged","created":"2024-08-19T19:57:55.63514221Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-08-19T19:57:43.433637214Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"cd3a71b3f87971114ebb42fa0c1c70bb\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podcd3a71b3f87971114ebb42fa0c1c70bb.slice","io.kubernetes.cri-o.ContainerID":"c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f3
76a","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-running-upgrade-814149_kube-system_cd3a71b3f87971114ebb42fa0c1c70bb_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-19T19:57:55.573512869Z","io.kubernetes.cri-o.HostName":"running-upgrade-814149","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-running-upgrade-814149","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"cd3a71b3f87971114ebb42fa0c1c70bb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-814149\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pod
s/kube-system_kube-controller-manager-running-upgrade-814149_cd3a71b3f87971114ebb42fa0c1c70bb/c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-controller-manager-running-upgrade-814149\",\"UID\":\"cd3a71b3f87971114ebb42fa0c1c70bb\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/61fe3381ef9f42f48a038466115de13d8a1a1370e99f2c5ffe5515b252647fab/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-running-upgrade-814149_kube-system_cd3a71b3f87971114ebb42fa0c1c70bb_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"
","io.kubernetes.cri-o.SandboxID":"c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-814149","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cd3a71b3f87971114ebb42fa0c1c70bb","kubernetes.io/config.hash":"cd3a71b3f87971114ebb42fa0c1c70bb","kubernetes.io/config.seen":"2024-08-19T19:57:43.433637214Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88","pid":1409,"status":"running","bundle":"/run/containers/storage/overlay-containers/c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88/userdata","rootfs":"/var/
lib/containers/storage/overlay/8f58b2890809c3536175082342a637ddf60d90ca3602135a55a7edbc2909eddf/merged","created":"2024-08-19T19:58:29.027292854Z","annotations":{"controller-revision-hash":"58bf5dfbd7","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2024-08-19T19:58:28.317453402Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod72574efa_cee8_4763_bf3d_424af3ae1c6c.slice","io.kubernetes.cri-o.ContainerID":"c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-zlldb_kube-system_72574efa-cee8-4763-bf3d-424af3ae1c6c_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-19T19:58:28.956451829Z","io.kubernetes.cri-o.HostName":"running-upgrade-814149","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/c683
a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-zlldb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-zlldb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.container.name\":\"POD\",\"controller-revision-hash\":\"58bf5dfbd7\",\"io.kubernetes.pod.uid\":\"72574efa-cee8-4763-bf3d-424af3ae1c6c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-zlldb_72574efa-cee8-4763-bf3d-424af3ae1c6c/c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-proxy-zlldb\",\"UID\":\"72574efa-cee8-4763-bf3d-424af3ae1c6c\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8f58b2890809c3536175082342a637ddf60d90ca3602135a55a7edbc2909eddf/merged","io.kubernetes.cri-
o.Name":"k8s_kube-proxy-zlldb_kube-system_72574efa-cee8-4763-bf3d-424af3ae1c6c_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88/userdata/shm","io.kubernetes.pod.name":"kube-proxy-zlldb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"72574efa-cee8-4763-bf3d-424af3ae1c6c","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2024-08-19T19:58
:28.317453402Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1","pid":1041,"status":"running","bundle":"/run/containers/storage/overlay-containers/f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1/userdata","rootfs":"/var/lib/containers/storage/overlay/9928428c67fa58437780935cbf8307a9da0bccb6412a49de14185d6f27f2d036/merged","created":"2024-08-19T19:57:56.694886375Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-08-19T19:57:43.433635108Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"68cd9fb77d32dc11dd8265589f1f254e\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.238:8443\"}","io.kubernetes.cri-o.CgroupParent"
:"kubepods-burstable-pod68cd9fb77d32dc11dd8265589f1f254e.slice","io.kubernetes.cri-o.ContainerID":"f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-running-upgrade-814149_kube-system_68cd9fb77d32dc11dd8265589f1f254e_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-19T19:57:56.565866924Z","io.kubernetes.cri-o.HostName":"running-upgrade-814149","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-running-upgrade-814149","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"68cd9fb77d32dc11dd8265589f1f254e\",\"io.kubernetes.pod.namespace\":\"kube
-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-814149\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-814149_68cd9fb77d32dc11dd8265589f1f254e/f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-apiserver-running-upgrade-814149\",\"UID\":\"68cd9fb77d32dc11dd8265589f1f254e\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9928428c67fa58437780935cbf8307a9da0bccb6412a49de14185d6f27f2d036/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-running-upgrade-814149_kube-system_68cd9fb77d32dc11dd8265589f1f254e_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f67f0a71460dceede0ce2edf05f8
e5869e8f39d74624b1528e23487704b0efe1/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-running-upgrade-814149","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"68cd9fb77d32dc11dd8265589f1f254e","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.238:8443","kubernetes.io/config.hash":"68cd9fb77d32dc11dd8265589f1f254e","kubernetes.io/config.seen":"2024-08-19T19:57:43.433635108Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0819 19:58:45.652323  481009 cri.go:126] list returned 15 containers
	I0819 19:58:45.652344  481009 cri.go:129] container: {ID:13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d Status:running}
	I0819 19:58:45.652364  481009 cri.go:135] skipping {13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d running}: state = "running", want "paused"
	I0819 19:58:45.652377  481009 cri.go:129] container: {ID:2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5 Status:running}
	I0819 19:58:45.652388  481009 cri.go:135] skipping {2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5 running}: state = "running", want "paused"
	I0819 19:58:45.652397  481009 cri.go:129] container: {ID:5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7 Status:running}
	I0819 19:58:45.652406  481009 cri.go:135] skipping {5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7 running}: state = "running", want "paused"
	I0819 19:58:45.652414  481009 cri.go:129] container: {ID:5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b Status:running}
	I0819 19:58:45.652423  481009 cri.go:131] skipping 5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b - not in ps
	I0819 19:58:45.652430  481009 cri.go:129] container: {ID:686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973 Status:running}
	I0819 19:58:45.652442  481009 cri.go:135] skipping {686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973 running}: state = "running", want "paused"
	I0819 19:58:45.652451  481009 cri.go:129] container: {ID:7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5 Status:running}
	I0819 19:58:45.652460  481009 cri.go:135] skipping {7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5 running}: state = "running", want "paused"
	I0819 19:58:45.652469  481009 cri.go:129] container: {ID:86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754 Status:running}
	I0819 19:58:45.652475  481009 cri.go:131] skipping 86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754 - not in ps
	I0819 19:58:45.652481  481009 cri.go:129] container: {ID:8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e Status:running}
	I0819 19:58:45.652490  481009 cri.go:135] skipping {8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e running}: state = "running", want "paused"
	I0819 19:58:45.652500  481009 cri.go:129] container: {ID:8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3 Status:stopped}
	I0819 19:58:45.652512  481009 cri.go:135] skipping {8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3 stopped}: state = "stopped", want "paused"
	I0819 19:58:45.652518  481009 cri.go:129] container: {ID:9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7 Status:running}
	I0819 19:58:45.652526  481009 cri.go:131] skipping 9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7 - not in ps
	I0819 19:58:45.652534  481009 cri.go:129] container: {ID:a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7 Status:running}
	I0819 19:58:45.652542  481009 cri.go:135] skipping {a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7 running}: state = "running", want "paused"
	I0819 19:58:45.652551  481009 cri.go:129] container: {ID:bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6 Status:running}
	I0819 19:58:45.652559  481009 cri.go:131] skipping bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6 - not in ps
	I0819 19:58:45.652566  481009 cri.go:129] container: {ID:c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a Status:running}
	I0819 19:58:45.652571  481009 cri.go:131] skipping c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a - not in ps
	I0819 19:58:45.652590  481009 cri.go:129] container: {ID:c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88 Status:running}
	I0819 19:58:45.652601  481009 cri.go:131] skipping c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88 - not in ps
	I0819 19:58:45.652607  481009 cri.go:129] container: {ID:f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1 Status:running}
	I0819 19:58:45.652615  481009 cri.go:131] skipping f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1 - not in ps
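
The cri.go lines above walk the CRI-O listing returned earlier and skip every container that is either not in the expected set ("not in ps") or not in the wanted state (running/stopped instead of paused). Below is a minimal, hypothetical Go sketch of that filter pattern; the Container struct and the hard-coded IDs are illustrative assumptions, not minikube's actual types.

-- go sketch (illustrative) --
package main

import "fmt"

// Container mirrors only the two fields the log prints; a simplification, not minikube's type.
type Container struct {
	ID     string
	Status string // "running", "stopped", "paused", ...
}

// filterPaused keeps containers that are paused AND whose ID appears in wantIDs,
// mirroring the "skipping ... state = %q, want %q" / "not in ps" decisions above.
func filterPaused(all []Container, wantIDs map[string]bool) []Container {
	var kept []Container
	for _, c := range all {
		if !wantIDs[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		if c.Status != "paused" {
			fmt.Printf("skipping %v: state = %q, want %q\n", c, c.Status, "paused")
			continue
		}
		kept = append(kept, c)
	}
	return kept
}

func main() {
	all := []Container{
		{ID: "13cacb87941e", Status: "running"},
		{ID: "8f9c3ed23c3c", Status: "stopped"},
	}
	want := map[string]bool{"13cacb87941e": true}
	fmt.Println("kept:", filterPaused(all, want))
}
-- /go sketch --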
	I0819 19:58:45.652672  481009 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0819 19:58:45.662436  481009 kubeadm.go:405] apiserver tunnel failed: apiserver port not set
	I0819 19:58:45.662468  481009 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:58:45.662475  481009 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:58:45.662532  481009 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:58:45.671153  481009 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:58:45.671805  481009 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-814149" does not appear in /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:58:45.672136  481009 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-430949/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-814149" cluster setting kubeconfig missing "running-upgrade-814149" context setting]
	I0819 19:58:45.672702  481009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:45.673609  481009 kapi.go:59] client config for running-upgrade-814149: &rest.Config{Host:"https://192.168.39.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/client.key", CAFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 19:58:45.674294  481009 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:58:45.683735  481009 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "running-upgrade-814149"
	   kubeletExtraArgs:
	     node-ip: 192.168.39.238
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
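
The drift shown in the diff above is that criSocket gains the unix:// scheme, cgroupDriver moves from systemd to cgroupfs, and hairpinMode/runtimeRequestTimeout are added, so the cluster is reconfigured from the new file. A minimal local sketch of drift detection via `diff -u`, assuming both files are readable on the local filesystem (the log actually runs the diff over SSH):

-- go sketch (illustrative) --
package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and reports drift when diff exits with status 1.
// diff exits 0 when the files match, 1 when they differ, and >1 on error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical: no reconfiguration needed
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ: reconfigure from the new file
	}
	return false, "", err // diff itself failed
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}
-- /go sketch --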
	I0819 19:58:45.683772  481009 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:58:45.683790  481009 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:58:45.683852  481009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:58:45.712316  481009 cri.go:89] found id: "13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d"
	I0819 19:58:45.712361  481009 cri.go:89] found id: "8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3"
	I0819 19:58:45.712369  481009 cri.go:89] found id: "7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5"
	I0819 19:58:45.712374  481009 cri.go:89] found id: "8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e"
	I0819 19:58:45.712378  481009 cri.go:89] found id: "a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7"
	I0819 19:58:45.712383  481009 cri.go:89] found id: "2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5"
	I0819 19:58:45.712387  481009 cri.go:89] found id: "686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973"
	I0819 19:58:45.712392  481009 cri.go:89] found id: "5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7"
	I0819 19:58:45.712395  481009 cri.go:89] found id: ""
	I0819 19:58:45.712403  481009 cri.go:252] Stopping containers: [13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d 8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3 7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5 8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7 2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5 686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973 5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7]
	I0819 19:58:45.712478  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:45.716712  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d 8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3 7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5 8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7 2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5 686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973 5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7
	W0819 19:58:45.803175  481009 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d 8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3 7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5 8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7 2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5 686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973 5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-19T19:58:45Z" level=fatal msg="stopping the container \"13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d\": rpc error: code = Unknown desc = failed to unmount container 13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d: layer not known"
	I0819 19:58:45.803253  481009 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:58:45.839075  481009 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:58:45.849357  481009 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Aug 19 19:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Aug 19 19:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 19 19:58 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Aug 19 19:57 /etc/kubernetes/scheduler.conf
	
	I0819 19:58:45.849450  481009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I0819 19:58:45.858228  481009 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:58:45.858314  481009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:58:45.868251  481009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I0819 19:58:45.876567  481009 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:58:45.876650  481009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:58:45.885208  481009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I0819 19:58:45.893515  481009 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:58:45.893592  481009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:58:45.902495  481009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I0819 19:58:45.913302  481009 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:58:45.913382  481009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
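
Each of the four existing kubeconfig-style files above is grepped for the expected control-plane endpoint (here the degenerate "https://control-plane.minikube.internal:0") and removed when it does not reference it, so `kubeadm init phase kubeconfig all` can regenerate it. A local-filesystem sketch of that loop, assuming plain file access instead of the SSH runner the log uses:

-- go sketch (illustrative) --
package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfNotPointingAt removes each conf file that does not reference the expected
// control-plane endpoint, so kubeadm can rewrite it during the kubeconfig phase.
func removeIfNotPointingAt(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // file already points at the expected endpoint; keep it
		}
		if err := os.Remove(p); err != nil {
			fmt.Println("W: could not remove", p, ":", err)
		}
	}
}

func main() {
	removeIfNotPointingAt("https://control-plane.minikube.internal:0", []string{
		"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf",
	})
}
-- /go sketch --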
	I0819 19:58:45.934567  481009 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:58:45.942966  481009 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:58:46.058557  481009 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:58:46.885279  481009 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:58:47.279146  481009 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:58:47.362409  481009 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
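
The restart reuses `kubeadm init` phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd, and later addon) rather than a full re-init, each time with the pinned binaries directory prepended to PATH. A sketch of driving those phases sequentially and aborting on the first failure; the binary path and phase list come from the log, the helper is hypothetical:

-- go sketch (illustrative) --
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runKubeadmPhases runs the listed `kubeadm init phase` subcommands in order, mirroring
// the `sudo env PATH=... kubeadm init phase ... --config ...` invocations in the log.
func runKubeadmPhases(binDir, config string, phases [][]string) error {
	path := binDir + ":" + os.Getenv("PATH")
	for _, phase := range phases {
		args := []string{"env", "PATH=" + path, "kubeadm", "init", "phase"}
		args = append(args, phase...)
		args = append(args, "--config", config)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	phases := [][]string{
		{"certs", "all"}, {"kubeconfig", "all"}, {"kubelet-start"},
		{"control-plane", "all"}, {"etcd", "local"},
	}
	if err := runKubeadmPhases("/var/lib/minikube/binaries/v1.24.1", "/var/tmp/minikube/kubeadm.yaml", phases); err != nil {
		fmt.Println(err)
	}
}
-- /go sketch --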
	I0819 19:58:47.483656  481009 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:58:47.483755  481009 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:58:47.522541  481009 api_server.go:72] duration metric: took 38.898531ms to wait for apiserver process to appear ...
	I0819 19:58:47.522633  481009 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:58:47.522669  481009 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0819 19:58:47.530893  481009 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0819 19:58:47.539720  481009 api_server.go:141] control plane version: v1.24.1
	I0819 19:58:47.539757  481009 api_server.go:131] duration metric: took 17.104951ms to wait for apiserver health ...
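
Once the control-plane phases finish, the apiserver's /healthz endpoint is polled until it answers 200 "ok" (api_server.go:253/279 above). A rough sketch of such a poll; TLS verification is skipped purely for brevity here, whereas the real client is built from the profile's client cert and CA (see the rest.Config dump earlier in the log):

-- go sketch (illustrative) --
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the URL until it returns 200 with body "ok" or the deadline passes.
// InsecureSkipVerify is only for the sketch; real code would trust the cluster CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.238:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
-- /go sketch --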
	I0819 19:58:47.539770  481009 cni.go:84] Creating CNI manager for ""
	I0819 19:58:47.539780  481009 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:58:47.541993  481009 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:58:47.543470  481009 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:58:47.555204  481009 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
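
The chosen bridge CNI is materialized as a conflist in /etc/cni/net.d (the 496-byte 1-k8s.conflist copied above). The log does not show the file's contents, so the sketch below builds a plausible bridge + portmap conflist and writes it locally; the plugin set, field values, and the 10.244.0.0/16 subnet (taken from the pod CIDR later in the log) are assumptions, not minikube's exact output:

-- go sketch (illustrative) --
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	// A plausible bridge CNI conflist; values are illustrative only.
	conflist := map[string]interface{}{
		"cniVersion": "0.4.0",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
		log.Fatal(err)
	}
}
-- /go sketch --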
	I0819 19:58:47.574873  481009 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:58:47.574978  481009 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 19:58:47.575010  481009 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 19:58:47.593525  481009 system_pods.go:59] 7 kube-system pods found
	I0819 19:58:47.593580  481009 system_pods.go:61] "coredns-6d4b75cb6d-n6bjb" [2c398c01-e3a8-4962-905b-8e22c52a6f6d] Running
	I0819 19:58:47.593590  481009 system_pods.go:61] "etcd-running-upgrade-814149" [619ee562-7fb0-4b5c-89aa-2b10d1050bd6] Running
	I0819 19:58:47.593596  481009 system_pods.go:61] "kube-apiserver-running-upgrade-814149" [16924e68-c47e-42ab-981c-9f6c64a35af6] Running
	I0819 19:58:47.593604  481009 system_pods.go:61] "kube-controller-manager-running-upgrade-814149" [c84e3c23-4505-452b-82bb-027c958dad19] Running
	I0819 19:58:47.593611  481009 system_pods.go:61] "kube-proxy-zlldb" [72574efa-cee8-4763-bf3d-424af3ae1c6c] Running
	I0819 19:58:47.593617  481009 system_pods.go:61] "kube-scheduler-running-upgrade-814149" [faeb4f9e-7ed2-465f-a755-9e820342a1c0] Running
	I0819 19:58:47.593622  481009 system_pods.go:61] "storage-provisioner" [d225c6b3-f05c-4157-94f0-d78926d01235] Running
	I0819 19:58:47.593634  481009 system_pods.go:74] duration metric: took 18.733761ms to wait for pod list to return data ...
	I0819 19:58:47.593646  481009 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:58:47.597456  481009 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0819 19:58:47.597506  481009 node_conditions.go:123] node cpu capacity is 2
	I0819 19:58:47.597524  481009 node_conditions.go:105] duration metric: took 3.868635ms to run NodePressure ...
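
The NodePressure verification above reads node capacity (ephemeral storage 17784752Ki, 2 CPUs) from the API. A sketch of the same query using client-go against the repaired kubeconfig; the import paths are the standard client-go ones, but treat the snippet as illustrative rather than minikube's implementation:

-- go sketch (illustrative) --
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig the restart just repaired.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19423-430949/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral capacity %s, cpu capacity %s\n", n.Name, eph.String(), cpu.String())
	}
}
-- /go sketch --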
	I0819 19:58:47.597552  481009 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:58:47.930040  481009 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:58:47.947013  481009 ops.go:34] apiserver oom_adj: -16
	I0819 19:58:47.947052  481009 kubeadm.go:597] duration metric: took 2.284569376s to restartPrimaryControlPlane
	I0819 19:58:47.947066  481009 kubeadm.go:394] duration metric: took 2.359404698s to StartCluster
	I0819 19:58:47.947121  481009 settings.go:142] acquiring lock: {Name:mkc71def6be9966e5f008a22161bf3ed26f482b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:47.947241  481009 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:58:47.948464  481009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:47.948749  481009 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:58:47.948909  481009 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:58:47.948976  481009 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-814149"
	I0819 19:58:47.948999  481009 config.go:182] Loaded profile config "running-upgrade-814149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0819 19:58:47.949010  481009 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-814149"
	I0819 19:58:47.949031  481009 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-814149"
	I0819 19:58:47.949005  481009 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-814149"
	W0819 19:58:47.949054  481009 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:58:47.949079  481009 host.go:66] Checking if "running-upgrade-814149" exists ...
	I0819 19:58:47.949428  481009 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:47.949456  481009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:47.949459  481009 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:47.949474  481009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:47.951337  481009 out.go:177] * Verifying Kubernetes components...
	I0819 19:58:47.952747  481009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:58:47.967427  481009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34079
	I0819 19:58:47.968034  481009 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:47.968635  481009 main.go:141] libmachine: Using API Version  1
	I0819 19:58:47.968661  481009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:47.969238  481009 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:47.969507  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetState
	I0819 19:58:47.970707  481009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38211
	I0819 19:58:47.971264  481009 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:47.971947  481009 main.go:141] libmachine: Using API Version  1
	I0819 19:58:47.971969  481009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:47.972397  481009 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:47.972720  481009 kapi.go:59] client config for running-upgrade-814149: &rest.Config{Host:"https://192.168.39.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/client.key", CAFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 19:58:47.972970  481009 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:47.972994  481009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:47.973027  481009 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-814149"
	W0819 19:58:47.973040  481009 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:58:47.973267  481009 host.go:66] Checking if "running-upgrade-814149" exists ...
	I0819 19:58:47.973634  481009 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:47.973661  481009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:47.994215  481009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38689
	I0819 19:58:47.995060  481009 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:47.995740  481009 main.go:141] libmachine: Using API Version  1
	I0819 19:58:47.995765  481009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:47.996164  481009 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:47.996753  481009 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:47.996778  481009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:48.009333  481009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37381
	I0819 19:58:48.010583  481009 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:48.011303  481009 main.go:141] libmachine: Using API Version  1
	I0819 19:58:48.011332  481009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:48.011821  481009 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:48.012078  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetState
	I0819 19:58:48.014193  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .DriverName
	I0819 19:58:48.016173  481009 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
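
The main.go lines in this block show libmachine launching one RPC "plugin server" per driver instance on an ephemeral localhost port and then issuing calls such as .GetVersion, .GetState and .DriverName against it. Below is a generic net/rpc sketch of that localhost plugin pattern only; the Driver service and GetState method are invented for illustration and are not libmachine's actual RPC surface:

-- go sketch (illustrative) --
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Driver is a stand-in plugin service; GetState is hypothetical, not libmachine's API.
type Driver struct{}

func (d *Driver) GetState(_ string, reply *string) error {
	*reply = "Running"
	return nil
}

func main() {
	// Server side: listen on an ephemeral localhost port,
	// like "Plugin server listening at address 127.0.0.1:NNNNN".
	srv := rpc.NewServer()
	if err := srv.Register(&Driver{}); err != nil {
		log.Fatal(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	go srv.Accept(ln)

	// Client side: dial the advertised address and call a method,
	// like "(running-upgrade-814149) Calling .GetState".
	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	var state string
	if err := client.Call("Driver.GetState", "", &state); err != nil {
		log.Fatal(err)
	}
	fmt.Println("plugin at", ln.Addr(), "reports state:", state)
}
-- /go sketch --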
	I0819 19:58:43.172809  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:43.173377  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:43.173423  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:43.173340  481419 retry.go:31] will retry after 340.952687ms: waiting for machine to come up
	I0819 19:58:43.516109  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:43.516721  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:43.516740  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:43.516680  481419 retry.go:31] will retry after 431.043253ms: waiting for machine to come up
	I0819 19:58:43.949254  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:43.949817  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:43.949836  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:43.949720  481419 retry.go:31] will retry after 467.702895ms: waiting for machine to come up
	I0819 19:58:44.419528  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:44.420236  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:44.420270  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:44.420146  481419 retry.go:31] will retry after 735.974424ms: waiting for machine to come up
	I0819 19:58:45.158487  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:45.159346  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:45.159367  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:45.159237  481419 retry.go:31] will retry after 939.601782ms: waiting for machine to come up
	I0819 19:58:46.101040  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:46.101620  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:46.101645  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:46.101573  481419 retry.go:31] will retry after 988.707631ms: waiting for machine to come up
	I0819 19:58:47.092271  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:47.092797  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:47.092817  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:47.092733  481419 retry.go:31] will retry after 1.289747968s: waiting for machine to come up
	I0819 19:58:43.466333  481208 cni.go:84] Creating CNI manager for ""
	I0819 19:58:43.466366  481208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:58:43.466378  481208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:58:43.466409  481208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-232147 NodeName:pause-232147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:58:43.466606  481208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-232147"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:58:43.466692  481208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:58:43.480800  481208 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:58:43.480943  481208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:58:43.494098  481208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 19:58:43.518868  481208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:58:43.550730  481208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0819 19:58:43.575823  481208 ssh_runner.go:195] Run: grep 192.168.50.125	control-plane.minikube.internal$ /etc/hosts
	I0819 19:58:43.582039  481208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:58:43.767732  481208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:58:43.821156  481208 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147 for IP: 192.168.50.125
	I0819 19:58:43.821187  481208 certs.go:194] generating shared ca certs ...
	I0819 19:58:43.821211  481208 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:43.821396  481208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:58:43.821450  481208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:58:43.821467  481208 certs.go:256] generating profile certs ...
	I0819 19:58:43.821620  481208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/client.key
	I0819 19:58:43.821705  481208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/apiserver.key.bef1e027
	I0819 19:58:43.821761  481208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/proxy-client.key
	I0819 19:58:43.821912  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:58:43.821949  481208 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:58:43.821958  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:58:43.821988  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:58:43.822021  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:58:43.822045  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:58:43.822096  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:58:43.823008  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:58:44.086925  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:58:44.262859  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:58:44.445056  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:58:44.554734  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 19:58:44.659085  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:58:44.730751  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:58:44.769563  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 19:58:44.811174  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:58:44.845022  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:58:44.888275  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:58:44.931824  481208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:58:45.018160  481208 ssh_runner.go:195] Run: openssl version
	I0819 19:58:45.028573  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:58:45.044017  481208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.052240  481208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.052330  481208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.061553  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:58:45.076326  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:58:45.096827  481208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.103822  481208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.104009  481208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.112205  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:58:45.124865  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:58:45.140513  481208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.146811  481208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.146908  481208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.154991  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:58:45.174607  481208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:58:45.180731  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:58:45.188748  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:58:45.196496  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:58:45.204894  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:58:45.216659  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:58:45.225302  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 19:58:45.237522  481208 kubeadm.go:392] StartCluster: {Name:pause-232147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:pause-232147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:58:45.237752  481208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:58:45.237831  481208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:58:45.312153  481208 cri.go:89] found id: "ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f"
	I0819 19:58:45.312251  481208 cri.go:89] found id: "97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606"
	I0819 19:58:45.312273  481208 cri.go:89] found id: "8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae"
	I0819 19:58:45.312302  481208 cri.go:89] found id: "a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3"
	I0819 19:58:45.312332  481208 cri.go:89] found id: "bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2"
	I0819 19:58:45.312347  481208 cri.go:89] found id: "87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0"
	I0819 19:58:45.312367  481208 cri.go:89] found id: "ba564d4d374b6de35552277a9f888a707e3fcc74a84da8bf6e8a43763dbe7a5c"
	I0819 19:58:45.312408  481208 cri.go:89] found id: "c362bfb09b902727dca16cc486a92f740411447ccf8a54937f1a2ce6b4861b94"
	I0819 19:58:45.312435  481208 cri.go:89] found id: ""
	I0819 19:58:45.312531  481208 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.540279837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097549540251479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf864cc9-3701-4294-919a-baba82614804 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.540825494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6bd3cfe9-99dd-4cf8-aae4-732c02fc9d60 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.540927404Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6bd3cfe9-99dd-4cf8-aae4-732c02fc9d60 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.541349528Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:520f27ed06d74c8d72a71cb6b710931be15d57349dc84d6addf3f17e5d45e1e4,PodSandboxId:685113e39e23351b2813c2da865a0acc2ad8a24f0ccd0d49ce497732bef321a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724097532179265307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd064504364d8ab84157d5097535d715c95c99f52b5cfca602136ec82be83677,PodSandboxId:6cc030921c7c40908c8e724ece7c6bd2c1e42ce6112ab3e4476dc3524cd3aa93,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724097532172244929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c7b8274ac3fc5960f8c620b1c947e80091b2742eb2b7755dceec1f2fdca7cb,PodSandboxId:7976e34be27ac12f9e15b0fffb9fbd20f1cab658eee55d923564382af2863505,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724097528426480711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annota
tions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94efaa503d71f37142cc657a5c12d83a68eaf26b8f8a39209f14d17e7aede30,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724097528436927820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36fe82de3b13311fd356d124ae973b46707e6e3b3adaa19a549738097deb26f,PodSandboxId:e7b297af9dc53849969bea95857c2d8b237073ba371897d2c04f0ced072cd2c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724097528472798822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernete
s.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c09ba25a9801ff169f19647ad1fbdd22aaf654ac030c1f93a4b0c9a999e453,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724097528399274745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724097524336094571,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724097524215098167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]string{io.kubernetes.
container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae,PodSandboxId:bc0526a6cc5b66c7d725af68cfc1142c2a1690930a8d29dcafc0a6c2e9bbf94c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724097522549800950,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kube
rnetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3,PodSandboxId:04773d646a4805fc307be3d4962a0d69eece9dbf938341bfb48e7dca69c8e352,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724097522440289922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0,PodSandboxId:97a2ccc19121b3e6cc856b4bfe704b0d64118bc96b8cd2ef34b1d73b8c6a5bee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724097521943106080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2,PodSandboxId:61b76d0d2f4f618dd211ef9c20819370ed398a5375a0c008b71f6312db73890e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724097521952123562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6bd3cfe9-99dd-4cf8-aae4-732c02fc9d60 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.585420021Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59450d58-58c4-43df-bf8a-69ff002d7d21 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.585521176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59450d58-58c4-43df-bf8a-69ff002d7d21 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.586767501Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c2d4bd6-a9c7-4bd3-b0f3-8780df1d112e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.587279275Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097549587251851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c2d4bd6-a9c7-4bd3-b0f3-8780df1d112e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.587783845Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7158b6a8-5100-4584-bc54-01770f2d90e1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.587863356Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7158b6a8-5100-4584-bc54-01770f2d90e1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.588208710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:520f27ed06d74c8d72a71cb6b710931be15d57349dc84d6addf3f17e5d45e1e4,PodSandboxId:685113e39e23351b2813c2da865a0acc2ad8a24f0ccd0d49ce497732bef321a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724097532179265307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd064504364d8ab84157d5097535d715c95c99f52b5cfca602136ec82be83677,PodSandboxId:6cc030921c7c40908c8e724ece7c6bd2c1e42ce6112ab3e4476dc3524cd3aa93,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724097532172244929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c7b8274ac3fc5960f8c620b1c947e80091b2742eb2b7755dceec1f2fdca7cb,PodSandboxId:7976e34be27ac12f9e15b0fffb9fbd20f1cab658eee55d923564382af2863505,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724097528426480711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annota
tions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94efaa503d71f37142cc657a5c12d83a68eaf26b8f8a39209f14d17e7aede30,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724097528436927820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36fe82de3b13311fd356d124ae973b46707e6e3b3adaa19a549738097deb26f,PodSandboxId:e7b297af9dc53849969bea95857c2d8b237073ba371897d2c04f0ced072cd2c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724097528472798822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernete
s.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c09ba25a9801ff169f19647ad1fbdd22aaf654ac030c1f93a4b0c9a999e453,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724097528399274745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724097524336094571,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724097524215098167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]string{io.kubernetes.
container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae,PodSandboxId:bc0526a6cc5b66c7d725af68cfc1142c2a1690930a8d29dcafc0a6c2e9bbf94c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724097522549800950,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kube
rnetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3,PodSandboxId:04773d646a4805fc307be3d4962a0d69eece9dbf938341bfb48e7dca69c8e352,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724097522440289922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0,PodSandboxId:97a2ccc19121b3e6cc856b4bfe704b0d64118bc96b8cd2ef34b1d73b8c6a5bee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724097521943106080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2,PodSandboxId:61b76d0d2f4f618dd211ef9c20819370ed398a5375a0c008b71f6312db73890e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724097521952123562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7158b6a8-5100-4584-bc54-01770f2d90e1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.631076772Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=494ba59b-0928-4b72-a441-30db4771aace name=/runtime.v1.RuntimeService/Version
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.631194223Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=494ba59b-0928-4b72-a441-30db4771aace name=/runtime.v1.RuntimeService/Version
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.633260044Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a4be0a8-0e38-4114-9d5e-2a4db6d3ef8b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.633650387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097549633622088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a4be0a8-0e38-4114-9d5e-2a4db6d3ef8b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.634426205Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac3138a7-025d-47e4-9be2-7a25baa264c3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.634483513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac3138a7-025d-47e4-9be2-7a25baa264c3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.634728040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:520f27ed06d74c8d72a71cb6b710931be15d57349dc84d6addf3f17e5d45e1e4,PodSandboxId:685113e39e23351b2813c2da865a0acc2ad8a24f0ccd0d49ce497732bef321a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724097532179265307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd064504364d8ab84157d5097535d715c95c99f52b5cfca602136ec82be83677,PodSandboxId:6cc030921c7c40908c8e724ece7c6bd2c1e42ce6112ab3e4476dc3524cd3aa93,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724097532172244929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c7b8274ac3fc5960f8c620b1c947e80091b2742eb2b7755dceec1f2fdca7cb,PodSandboxId:7976e34be27ac12f9e15b0fffb9fbd20f1cab658eee55d923564382af2863505,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724097528426480711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annota
tions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94efaa503d71f37142cc657a5c12d83a68eaf26b8f8a39209f14d17e7aede30,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724097528436927820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36fe82de3b13311fd356d124ae973b46707e6e3b3adaa19a549738097deb26f,PodSandboxId:e7b297af9dc53849969bea95857c2d8b237073ba371897d2c04f0ced072cd2c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724097528472798822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernete
s.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c09ba25a9801ff169f19647ad1fbdd22aaf654ac030c1f93a4b0c9a999e453,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724097528399274745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724097524336094571,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724097524215098167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]string{io.kubernetes.
container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae,PodSandboxId:bc0526a6cc5b66c7d725af68cfc1142c2a1690930a8d29dcafc0a6c2e9bbf94c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724097522549800950,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kube
rnetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3,PodSandboxId:04773d646a4805fc307be3d4962a0d69eece9dbf938341bfb48e7dca69c8e352,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724097522440289922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0,PodSandboxId:97a2ccc19121b3e6cc856b4bfe704b0d64118bc96b8cd2ef34b1d73b8c6a5bee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724097521943106080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2,PodSandboxId:61b76d0d2f4f618dd211ef9c20819370ed398a5375a0c008b71f6312db73890e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724097521952123562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac3138a7-025d-47e4-9be2-7a25baa264c3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.676580585Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d84ee713-0906-4e0f-a60c-a4cb09bf940a name=/runtime.v1.RuntimeService/Version
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.676668047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d84ee713-0906-4e0f-a60c-a4cb09bf940a name=/runtime.v1.RuntimeService/Version
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.677947701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edf35646-8d22-4dbb-a73e-58b290de1128 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.678379598Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097549678355540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edf35646-8d22-4dbb-a73e-58b290de1128 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.678847772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec2c5963-f23f-4c11-9921-ba0512909ba6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.678916174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec2c5963-f23f-4c11-9921-ba0512909ba6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:09 pause-232147 crio[2642]: time="2024-08-19 19:59:09.679184801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:520f27ed06d74c8d72a71cb6b710931be15d57349dc84d6addf3f17e5d45e1e4,PodSandboxId:685113e39e23351b2813c2da865a0acc2ad8a24f0ccd0d49ce497732bef321a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724097532179265307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd064504364d8ab84157d5097535d715c95c99f52b5cfca602136ec82be83677,PodSandboxId:6cc030921c7c40908c8e724ece7c6bd2c1e42ce6112ab3e4476dc3524cd3aa93,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724097532172244929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c7b8274ac3fc5960f8c620b1c947e80091b2742eb2b7755dceec1f2fdca7cb,PodSandboxId:7976e34be27ac12f9e15b0fffb9fbd20f1cab658eee55d923564382af2863505,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724097528426480711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annota
tions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94efaa503d71f37142cc657a5c12d83a68eaf26b8f8a39209f14d17e7aede30,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724097528436927820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36fe82de3b13311fd356d124ae973b46707e6e3b3adaa19a549738097deb26f,PodSandboxId:e7b297af9dc53849969bea95857c2d8b237073ba371897d2c04f0ced072cd2c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724097528472798822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernete
s.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c09ba25a9801ff169f19647ad1fbdd22aaf654ac030c1f93a4b0c9a999e453,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724097528399274745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724097524336094571,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724097524215098167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]string{io.kubernetes.
container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae,PodSandboxId:bc0526a6cc5b66c7d725af68cfc1142c2a1690930a8d29dcafc0a6c2e9bbf94c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724097522549800950,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kube
rnetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3,PodSandboxId:04773d646a4805fc307be3d4962a0d69eece9dbf938341bfb48e7dca69c8e352,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724097522440289922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0,PodSandboxId:97a2ccc19121b3e6cc856b4bfe704b0d64118bc96b8cd2ef34b1d73b8c6a5bee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724097521943106080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2,PodSandboxId:61b76d0d2f4f618dd211ef9c20819370ed398a5375a0c008b71f6312db73890e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724097521952123562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec2c5963-f23f-4c11-9921-ba0512909ba6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	520f27ed06d74       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   17 seconds ago      Running             kube-proxy                2                   685113e39e233       kube-proxy-ztskd
	fd064504364d8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 seconds ago      Running             coredns                   2                   6cc030921c7c4       coredns-6f6b679f8f-gvnqf
	d36fe82de3b13       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 seconds ago      Running             kube-apiserver            2                   e7b297af9dc53       kube-apiserver-pause-232147
	e94efaa503d71       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   21 seconds ago      Running             kube-scheduler            2                   549574688b704       kube-scheduler-pause-232147
	a5c7b8274ac3f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   21 seconds ago      Running             etcd                      2                   7976e34be27ac       etcd-pause-232147
	36c09ba25a980       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   21 seconds ago      Running             kube-controller-manager   2                   96b1c1881ad23       kube-controller-manager-pause-232147
	ca316e974a78c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   25 seconds ago      Exited              kube-controller-manager   1                   96b1c1881ad23       kube-controller-manager-pause-232147
	97e775e2fa765       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   25 seconds ago      Exited              kube-scheduler            1                   549574688b704       kube-scheduler-pause-232147
	8de8790b20b2c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   27 seconds ago      Exited              coredns                   1                   bc0526a6cc5b6       coredns-6f6b679f8f-gvnqf
	a21d2be3b0654       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   27 seconds ago      Exited              kube-apiserver            1                   04773d646a480       kube-apiserver-pause-232147
	bd77b49b539b6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   27 seconds ago      Exited              etcd                      1                   61b76d0d2f4f6       etcd-pause-232147
	87de891f8b0c8       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   27 seconds ago      Exited              kube-proxy                1                   97a2ccc19121b       kube-proxy-ztskd
	
	
	==> coredns [8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae] <==
	
	
	==> coredns [fd064504364d8ab84157d5097535d715c95c99f52b5cfca602136ec82be83677] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44577 - 51715 "HINFO IN 572853228764530317.4687144521485707226. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021885695s
	
	
	==> describe nodes <==
	Name:               pause-232147
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-232147
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=pause-232147
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_58_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:58:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-232147
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:59:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:58:51 +0000   Mon, 19 Aug 2024 19:58:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:58:51 +0000   Mon, 19 Aug 2024 19:58:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:58:51 +0000   Mon, 19 Aug 2024 19:58:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:58:51 +0000   Mon, 19 Aug 2024 19:58:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.125
	  Hostname:    pause-232147
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 cec13e5b52894b7da1ee2640bfe5479a
	  System UUID:                cec13e5b-5289-4b7d-a1ee-2640bfe5479a
	  Boot ID:                    6d58c741-491b-4fca-9459-a14744ac1965
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-gvnqf                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     54s
	  kube-system                 etcd-pause-232147                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         59s
	  kube-system                 kube-apiserver-pause-232147             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-pause-232147    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-ztskd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-pause-232147             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17s                kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeAllocatableEnforced  66s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     65s (x7 over 66s)  kubelet          Node pause-232147 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  65s (x8 over 66s)  kubelet          Node pause-232147 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 66s)  kubelet          Node pause-232147 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s                kubelet          Node pause-232147 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  59s                kubelet          Node pause-232147 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s                kubelet          Node pause-232147 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeReady                58s                kubelet          Node pause-232147 status is now: NodeReady
	  Normal  RegisteredNode           55s                node-controller  Node pause-232147 event: Registered Node pause-232147 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21s (x8 over 22s)  kubelet          Node pause-232147 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 22s)  kubelet          Node pause-232147 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 22s)  kubelet          Node pause-232147 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15s                node-controller  Node pause-232147 event: Registered Node pause-232147 in Controller
	
	
	==> dmesg <==
	[  +8.633352] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.060437] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063277] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.180529] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.146161] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.280481] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.365060] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.065319] kauditd_printk_skb: 130 callbacks suppressed
	[Aug19 19:58] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.936858] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.641222] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +0.080958] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.328701] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.100226] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.264780] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.114436] systemd-fstab-generator[2031]: Ignoring "noauto" option for root device
	[  +0.220683] systemd-fstab-generator[2044]: Ignoring "noauto" option for root device
	[  +0.229901] systemd-fstab-generator[2058]: Ignoring "noauto" option for root device
	[  +0.211063] systemd-fstab-generator[2087]: Ignoring "noauto" option for root device
	[  +0.758930] systemd-fstab-generator[2421]: Ignoring "noauto" option for root device
	[  +1.201288] systemd-fstab-generator[2762]: Ignoring "noauto" option for root device
	[  +4.005541] systemd-fstab-generator[3303]: Ignoring "noauto" option for root device
	[  +0.081224] kauditd_printk_skb: 243 callbacks suppressed
	[  +7.604898] kauditd_printk_skb: 53 callbacks suppressed
	[Aug19 19:59] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	
	
	==> etcd [a5c7b8274ac3fc5960f8c620b1c947e80091b2742eb2b7755dceec1f2fdca7cb] <==
	{"level":"info","ts":"2024-08-19T19:58:49.140677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 switched to configuration voters=(10250663014225178659)"}
	{"level":"info","ts":"2024-08-19T19:58:49.140734Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"40e9c4986db8cbc5","local-member-id":"8e41abb37b207023","added-peer-id":"8e41abb37b207023","added-peer-peer-urls":["https://192.168.50.125:2380"]}
	{"level":"info","ts":"2024-08-19T19:58:49.140844Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"40e9c4986db8cbc5","local-member-id":"8e41abb37b207023","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:58:49.140883Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:58:49.156436Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T19:58:49.156673Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8e41abb37b207023","initial-advertise-peer-urls":["https://192.168.50.125:2380"],"listen-peer-urls":["https://192.168.50.125:2380"],"advertise-client-urls":["https://192.168.50.125:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.125:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T19:58:49.156712Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T19:58:49.156802Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.125:2380"}
	{"level":"info","ts":"2024-08-19T19:58:49.156823Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.125:2380"}
	{"level":"info","ts":"2024-08-19T19:58:50.298489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T19:58:50.298567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T19:58:50.298593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 received MsgPreVoteResp from 8e41abb37b207023 at term 2"}
	{"level":"info","ts":"2024-08-19T19:58:50.298607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T19:58:50.298614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 received MsgVoteResp from 8e41abb37b207023 at term 3"}
	{"level":"info","ts":"2024-08-19T19:58:50.298623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T19:58:50.298630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e41abb37b207023 elected leader 8e41abb37b207023 at term 3"}
	{"level":"info","ts":"2024-08-19T19:58:50.305092Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8e41abb37b207023","local-member-attributes":"{Name:pause-232147 ClientURLs:[https://192.168.50.125:2379]}","request-path":"/0/members/8e41abb37b207023/attributes","cluster-id":"40e9c4986db8cbc5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T19:58:50.305110Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:58:50.305359Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:58:50.305767Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T19:58:50.305814Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T19:58:50.306472Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:58:50.306623Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:58:50.307402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.125:2379"}
	{"level":"info","ts":"2024-08-19T19:58:50.307526Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2] <==
	
	
	==> kernel <==
	 19:59:10 up 1 min,  0 users,  load average: 1.47, 0.48, 0.17
	Linux pause-232147 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3] <==
	
	
	==> kube-apiserver [d36fe82de3b13311fd356d124ae973b46707e6e3b3adaa19a549738097deb26f] <==
	I0819 19:58:51.648498       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 19:58:51.648544       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 19:58:51.656881       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 19:58:51.657062       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 19:58:51.657082       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 19:58:51.657154       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 19:58:51.657182       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 19:58:51.657331       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 19:58:51.658168       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 19:58:51.658947       1 aggregator.go:171] initial CRD sync complete...
	I0819 19:58:51.658978       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 19:58:51.659009       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 19:58:51.659015       1 cache.go:39] Caches are synced for autoregister controller
	I0819 19:58:51.698745       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 19:58:51.713161       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 19:58:51.713254       1 policy_source.go:224] refreshing policies
	I0819 19:58:51.756119       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 19:58:52.556246       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 19:58:53.086677       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 19:58:53.111132       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 19:58:53.164861       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 19:58:53.213887       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 19:58:53.225141       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 19:58:55.314341       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 19:58:55.363710       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [36c09ba25a9801ff169f19647ad1fbdd22aaf654ac030c1f93a4b0c9a999e453] <==
	I0819 19:58:54.963617       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0819 19:58:54.963635       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0819 19:58:54.963643       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0819 19:58:54.963708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-232147"
	I0819 19:58:54.963547       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0819 19:58:54.963822       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-232147"
	I0819 19:58:54.963864       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0819 19:58:54.964736       1 shared_informer.go:320] Caches are synced for ephemeral
	I0819 19:58:54.970487       1 shared_informer.go:320] Caches are synced for service account
	I0819 19:58:54.974351       1 shared_informer.go:320] Caches are synced for persistent volume
	I0819 19:58:55.026810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="116.595727ms"
	I0819 19:58:55.027078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="103.13µs"
	I0819 19:58:55.061004       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0819 19:58:55.068789       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0819 19:58:55.105632       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0819 19:58:55.116616       1 shared_informer.go:320] Caches are synced for endpoint
	I0819 19:58:55.137711       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 19:58:55.165306       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 19:58:55.210212       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0819 19:58:55.211825       1 shared_informer.go:320] Caches are synced for disruption
	I0819 19:58:55.589496       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 19:58:55.651784       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 19:58:55.651893       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0819 19:58:59.474816       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="19.167031ms"
	I0819 19:58:59.474909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="50.943µs"
	
	
	==> kube-controller-manager [ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f] <==
	I0819 19:58:45.346924       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-proxy [520f27ed06d74c8d72a71cb6b710931be15d57349dc84d6addf3f17e5d45e1e4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:58:52.385361       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:58:52.394751       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.125"]
	E0819 19:58:52.394933       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:58:52.429522       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:58:52.429622       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:58:52.429675       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:58:52.432196       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:58:52.432522       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:58:52.432572       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:58:52.433622       1 config.go:197] "Starting service config controller"
	I0819 19:58:52.433686       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:58:52.433721       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:58:52.433738       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:58:52.434268       1 config.go:326] "Starting node config controller"
	I0819 19:58:52.434339       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:58:52.533810       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 19:58:52.533881       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:58:52.534429       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0] <==
	
	
	==> kube-scheduler [97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606] <==
	I0819 19:58:45.693769       1 serving.go:386] Generated self-signed cert in-memory
	W0819 19:58:46.212587       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.50.125:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.50.125:8443: connect: connection refused
	W0819 19:58:46.212680       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 19:58:46.212705       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 19:58:46.219639       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 19:58:46.219727       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:58:46.221751       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0819 19:58:46.221864       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0819 19:58:46.221970       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e94efaa503d71f37142cc657a5c12d83a68eaf26b8f8a39209f14d17e7aede30] <==
	I0819 19:58:49.747005       1 serving.go:386] Generated self-signed cert in-memory
	I0819 19:58:51.678200       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 19:58:51.678357       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:58:51.685371       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0819 19:58:51.685501       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0819 19:58:51.685605       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 19:58:51.685657       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 19:58:51.685691       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0819 19:58:51.685743       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0819 19:58:51.686607       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 19:58:51.686742       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 19:58:51.786592       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0819 19:58:51.786668       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0819 19:58:51.786758       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 19:58:48 pause-232147 kubelet[3310]: E0819 19:58:48.267625    3310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.125:8443: connect: connection refused" node="pause-232147"
	Aug 19 19:58:48 pause-232147 kubelet[3310]: I0819 19:58:48.371615    3310 scope.go:117] "RemoveContainer" containerID="ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f"
	Aug 19 19:58:48 pause-232147 kubelet[3310]: I0819 19:58:48.372463    3310 scope.go:117] "RemoveContainer" containerID="97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606"
	Aug 19 19:58:48 pause-232147 kubelet[3310]: I0819 19:58:48.373063    3310 scope.go:117] "RemoveContainer" containerID="bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2"
	Aug 19 19:58:48 pause-232147 kubelet[3310]: I0819 19:58:48.373270    3310 scope.go:117] "RemoveContainer" containerID="a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3"
	Aug 19 19:58:48 pause-232147 kubelet[3310]: E0819 19:58:48.489973    3310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-232147?timeout=10s\": dial tcp 192.168.50.125:8443: connect: connection refused" interval="800ms"
	Aug 19 19:58:48 pause-232147 kubelet[3310]: I0819 19:58:48.670395    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-232147"
	Aug 19 19:58:48 pause-232147 kubelet[3310]: E0819 19:58:48.671231    3310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.125:8443: connect: connection refused" node="pause-232147"
	Aug 19 19:58:49 pause-232147 kubelet[3310]: I0819 19:58:49.472944    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-232147"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: E0819 19:58:51.790377    3310 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-232147\" already exists" pod="kube-system/kube-controller-manager-pause-232147"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.800911    3310 kubelet_node_status.go:111] "Node was previously registered" node="pause-232147"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.801199    3310 kubelet_node_status.go:75] "Successfully registered node" node="pause-232147"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.801293    3310 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.802330    3310 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.849875    3310 apiserver.go:52] "Watching apiserver"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.880002    3310 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.935109    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4fa0745-fdef-4780-9b98-0a777d4cec90-xtables-lock\") pod \"kube-proxy-ztskd\" (UID: \"c4fa0745-fdef-4780-9b98-0a777d4cec90\") " pod="kube-system/kube-proxy-ztskd"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.935259    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4fa0745-fdef-4780-9b98-0a777d4cec90-lib-modules\") pod \"kube-proxy-ztskd\" (UID: \"c4fa0745-fdef-4780-9b98-0a777d4cec90\") " pod="kube-system/kube-proxy-ztskd"
	Aug 19 19:58:52 pause-232147 kubelet[3310]: I0819 19:58:52.153751    3310 scope.go:117] "RemoveContainer" containerID="87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0"
	Aug 19 19:58:52 pause-232147 kubelet[3310]: I0819 19:58:52.154257    3310 scope.go:117] "RemoveContainer" containerID="8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae"
	Aug 19 19:58:57 pause-232147 kubelet[3310]: E0819 19:58:57.963976    3310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097537963552255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:58:57 pause-232147 kubelet[3310]: E0819 19:58:57.964060    3310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097537963552255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:58:59 pause-232147 kubelet[3310]: I0819 19:58:59.439098    3310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 19 19:59:07 pause-232147 kubelet[3310]: E0819 19:59:07.965929    3310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097547965252728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:59:07 pause-232147 kubelet[3310]: E0819 19:59:07.965965    3310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097547965252728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:59:09.214990  481872 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-430949/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
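The `logs.go:258` failure in the stderr block above is Go's standard bufio.ErrTooLong: a bufio.Scanner rejects any single token longer than its buffer limit (bufio.MaxScanTokenSize, 64 KiB, by default), and lastStart.txt evidently contains a line that long. The following is a minimal, self-contained sketch of that failure mode and the usual workaround (raising the scanner's buffer limit); it is not minikube's actual implementation, and the file name is reused from the log purely for illustration.

// longlines.go: sketch only, not minikube's logs.go. Shows why
// "bufio.Scanner: token too long" occurs and one way to avoid it.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // illustrative input file
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// By default a Scanner stops with bufio.ErrTooLong ("bufio.Scanner: token
	// too long") when a line exceeds bufio.MaxScanTokenSize (64 KiB).
	// Supplying a larger maximum lets very long single-line entries through.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow lines up to 10 MiB

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
	}
}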
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-232147 -n pause-232147
helpers_test.go:261: (dbg) Run:  kubectl --context pause-232147 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-232147 -n pause-232147
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-232147 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-232147 logs -n 25: (1.320154692s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo cat                            | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo docker                         | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo cat                            | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo cat                            | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo cat                            | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo                                | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo find                           | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p cilium-072157 sudo crio                           | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p cilium-072157                                     | cilium-072157          | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC | 19 Aug 24 19:57 UTC |
	| start   | -p pause-232147 --memory=2048                        | pause-232147           | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC | 19 Aug 24 19:58 UTC |
	|         | --install-addons=false                               |                        |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p cert-expiration-228973                            | cert-expiration-228973 | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC | 19 Aug 24 19:58 UTC |
	|         | --memory=2048                                        |                        |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p NoKubernetes-803941                               | NoKubernetes-803941    | jenkins | v1.33.1 | 19 Aug 24 19:57 UTC | 19 Aug 24 19:58 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p running-upgrade-814149                            | running-upgrade-814149 | jenkins | v1.33.1 | 19 Aug 24 19:58 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                    |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p pause-232147                                      | pause-232147           | jenkins | v1.33.1 | 19 Aug 24 19:58 UTC | 19 Aug 24 19:59 UTC |
	|         | --alsologtostderr                                    |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| delete  | -p NoKubernetes-803941                               | NoKubernetes-803941    | jenkins | v1.33.1 | 19 Aug 24 19:58 UTC | 19 Aug 24 19:58 UTC |
	| start   | -p NoKubernetes-803941                               | NoKubernetes-803941    | jenkins | v1.33.1 | 19 Aug 24 19:58 UTC | 19 Aug 24 19:59 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| ssh     | -p NoKubernetes-803941 sudo                          | NoKubernetes-803941    | jenkins | v1.33.1 | 19 Aug 24 19:59 UTC |                     |
	|         | systemctl is-active --quiet                          |                        |         |         |                     |                     |
	|         | service kubelet                                      |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:58:33
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:58:33.071169  481365 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:58:33.071262  481365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:58:33.071265  481365 out.go:358] Setting ErrFile to fd 2...
	I0819 19:58:33.071268  481365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:58:33.071469  481365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:58:33.072056  481365 out.go:352] Setting JSON to false
	I0819 19:58:33.073111  481365 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13264,"bootTime":1724084249,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:58:33.073194  481365 start.go:139] virtualization: kvm guest
	I0819 19:58:33.076478  481365 out.go:177] * [NoKubernetes-803941] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:58:33.077722  481365 notify.go:220] Checking for updates...
	I0819 19:58:33.077742  481365 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 19:58:33.079016  481365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:58:33.080282  481365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:58:33.081707  481365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:58:33.083105  481365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:58:33.084250  481365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:58:33.085985  481365 config.go:182] Loaded profile config "cert-expiration-228973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:58:33.086180  481365 config.go:182] Loaded profile config "pause-232147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:58:33.086344  481365 config.go:182] Loaded profile config "running-upgrade-814149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0819 19:58:33.086370  481365 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0819 19:58:33.086475  481365 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 19:58:33.126767  481365 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 19:58:33.128018  481365 start.go:297] selected driver: kvm2
	I0819 19:58:33.128036  481365 start.go:901] validating driver "kvm2" against <nil>
	I0819 19:58:33.128051  481365 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:58:33.128491  481365 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0819 19:58:33.128569  481365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:58:33.128660  481365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:58:33.146060  481365 install.go:137] /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:58:33.146119  481365 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 19:58:33.146663  481365 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0819 19:58:33.146867  481365 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 19:58:33.146918  481365 cni.go:84] Creating CNI manager for ""
	I0819 19:58:33.146928  481365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:58:33.146934  481365 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 19:58:33.146946  481365 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0819 19:58:33.146990  481365 start.go:340] cluster config:
	{Name:NoKubernetes-803941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-803941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:58:33.147091  481365 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:58:33.148818  481365 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-803941
	I0819 19:58:31.388963  481208 machine.go:93] provisionDockerMachine start ...
	I0819 19:58:31.388997  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:31.389297  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:31.392309  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.392752  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.392784  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.392909  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:31.393153  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.393333  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.393480  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:31.393664  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:31.393860  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:31.393871  481208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:58:31.519088  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-232147
	
	I0819 19:58:31.519130  481208 main.go:141] libmachine: (pause-232147) Calling .GetMachineName
	I0819 19:58:31.519424  481208 buildroot.go:166] provisioning hostname "pause-232147"
	I0819 19:58:31.519456  481208 main.go:141] libmachine: (pause-232147) Calling .GetMachineName
	I0819 19:58:31.519708  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:31.524012  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.524468  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.524512  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.524877  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:31.525160  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.525378  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.525600  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:31.525797  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:31.526030  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:31.526050  481208 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-232147 && echo "pause-232147" | sudo tee /etc/hostname
	I0819 19:58:31.684523  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-232147
	
	I0819 19:58:31.684572  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:31.689946  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.690907  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.690949  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.693424  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:31.693677  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.693860  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:31.694050  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:31.694292  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:31.694555  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:31.694581  481208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-232147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-232147/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-232147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:58:31.820537  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:58:31.820572  481208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-430949/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-430949/.minikube}
	I0819 19:58:31.820616  481208 buildroot.go:174] setting up certificates
	I0819 19:58:31.820631  481208 provision.go:84] configureAuth start
	I0819 19:58:31.820645  481208 main.go:141] libmachine: (pause-232147) Calling .GetMachineName
	I0819 19:58:31.821054  481208 main.go:141] libmachine: (pause-232147) Calling .GetIP
	I0819 19:58:31.824252  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.824872  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.824899  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.824952  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:31.828009  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.828405  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:31.828430  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:31.828755  481208 provision.go:143] copyHostCerts
	I0819 19:58:31.828816  481208 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:58:31.828837  481208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:58:31.828913  481208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:58:31.829048  481208 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:58:31.829059  481208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:58:31.829089  481208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:58:31.829219  481208 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:58:31.829233  481208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:58:31.829267  481208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:58:31.829338  481208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.pause-232147 san=[127.0.0.1 192.168.50.125 localhost minikube pause-232147]
	I0819 19:58:32.050961  481208 provision.go:177] copyRemoteCerts
	I0819 19:58:32.051050  481208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:58:32.051084  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:32.062514  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:32.206080  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:32.206125  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:32.206621  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:32.206873  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:32.207097  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:32.207306  481208 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/pause-232147/id_rsa Username:docker}
	I0819 19:58:32.300698  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:58:32.331302  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0819 19:58:32.366804  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:58:32.396394  481208 provision.go:87] duration metric: took 575.744609ms to configureAuth
	I0819 19:58:32.396520  481208 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:58:32.396872  481208 config.go:182] Loaded profile config "pause-232147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:58:32.396984  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:32.800274  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:32.800754  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:32.800817  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:32.801001  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:32.801269  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:32.801444  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:32.801589  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:32.801804  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:32.802033  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:32.802058  481208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:58:29.839179  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:29.839213  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:29.839373  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHHostname
	I0819 19:58:29.842006  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:29.842502  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:29.842588  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:29.842655  481009 provision.go:143] copyHostCerts
	I0819 19:58:29.842725  481009 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem, removing ...
	I0819 19:58:29.842748  481009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem
	I0819 19:58:29.842815  481009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/ca.pem (1082 bytes)
	I0819 19:58:29.842938  481009 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem, removing ...
	I0819 19:58:29.842950  481009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem
	I0819 19:58:29.842982  481009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/cert.pem (1123 bytes)
	I0819 19:58:29.843059  481009 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem, removing ...
	I0819 19:58:29.843069  481009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem
	I0819 19:58:29.843096  481009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-430949/.minikube/key.pem (1675 bytes)
	I0819 19:58:29.843163  481009 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-814149 san=[127.0.0.1 192.168.39.238 localhost minikube running-upgrade-814149]
	I0819 19:58:30.035337  481009 provision.go:177] copyRemoteCerts
	I0819 19:58:30.035422  481009 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:58:30.035468  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHHostname
	I0819 19:58:30.038988  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:30.039800  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:30.039836  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:30.039857  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHPort
	I0819 19:58:30.040104  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:30.040295  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHUsername
	I0819 19:58:30.040490  481009 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/running-upgrade-814149/id_rsa Username:docker}
	I0819 19:58:30.185597  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:58:30.278587  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 19:58:30.312547  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 19:58:30.385191  481009 provision.go:87] duration metric: took 549.928883ms to configureAuth
	I0819 19:58:30.385225  481009 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:58:30.385458  481009 config.go:182] Loaded profile config "running-upgrade-814149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0819 19:58:30.385557  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHHostname
	I0819 19:58:30.388596  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:30.389062  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:30.389105  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:30.389308  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHPort
	I0819 19:58:30.389572  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:30.389902  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:30.390113  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHUsername
	I0819 19:58:30.390315  481009 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:30.390527  481009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 19:58:30.390548  481009 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:58:31.083722  481009 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:58:31.083755  481009 machine.go:96] duration metric: took 1.895408222s to provisionDockerMachine
	I0819 19:58:31.083772  481009 start.go:293] postStartSetup for "running-upgrade-814149" (driver="kvm2")
	I0819 19:58:31.083786  481009 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:58:31.083834  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .DriverName
	I0819 19:58:31.084190  481009 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:58:31.084221  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHHostname
	I0819 19:58:31.087008  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.087479  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:31.087511  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.087705  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHPort
	I0819 19:58:31.087925  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:31.088085  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHUsername
	I0819 19:58:31.088272  481009 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/running-upgrade-814149/id_rsa Username:docker}
	I0819 19:58:31.181029  481009 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:58:31.185362  481009 info.go:137] Remote host: Buildroot 2021.02.12
	I0819 19:58:31.185398  481009 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:58:31.185484  481009 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:58:31.185586  481009 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:58:31.185706  481009 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:58:31.194293  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:58:31.218716  481009 start.go:296] duration metric: took 134.925617ms for postStartSetup
	I0819 19:58:31.218769  481009 fix.go:56] duration metric: took 2.059278178s for fixHost
	I0819 19:58:31.218798  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHHostname
	I0819 19:58:31.222041  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.222476  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:31.222508  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.222734  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHPort
	I0819 19:58:31.222980  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:31.223165  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:31.223379  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHUsername
	I0819 19:58:31.223607  481009 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:31.223834  481009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 19:58:31.223854  481009 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:58:31.358639  481009 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724097511.352223012
	
	I0819 19:58:31.358668  481009 fix.go:216] guest clock: 1724097511.352223012
	I0819 19:58:31.358679  481009 fix.go:229] Guest: 2024-08-19 19:58:31.352223012 +0000 UTC Remote: 2024-08-19 19:58:31.218774838 +0000 UTC m=+21.422283084 (delta=133.448174ms)
	I0819 19:58:31.358706  481009 fix.go:200] guest clock delta is within tolerance: 133.448174ms
	I0819 19:58:31.358713  481009 start.go:83] releasing machines lock for "running-upgrade-814149", held for 2.199259317s
	I0819 19:58:31.358744  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .DriverName
	I0819 19:58:31.359065  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetIP
	I0819 19:58:31.362320  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.362720  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:31.362754  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.362922  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .DriverName
	I0819 19:58:31.363559  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .DriverName
	I0819 19:58:31.363825  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .DriverName
	I0819 19:58:31.363997  481009 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:58:31.364069  481009 ssh_runner.go:195] Run: cat /version.json
	I0819 19:58:31.364099  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHHostname
	I0819 19:58:31.364129  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHHostname
	I0819 19:58:31.367135  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.367356  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.367581  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:31.367604  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.367750  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:31.367821  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:31.368118  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHPort
	I0819 19:58:31.368151  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHPort
	I0819 19:58:31.368349  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:31.368495  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHKeyPath
	I0819 19:58:31.368524  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHUsername
	I0819 19:58:31.368698  481009 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/running-upgrade-814149/id_rsa Username:docker}
	I0819 19:58:31.368714  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetSSHUsername
	I0819 19:58:31.369060  481009 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/running-upgrade-814149/id_rsa Username:docker}
	W0819 19:58:31.491537  481009 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0819 19:58:31.491637  481009 ssh_runner.go:195] Run: systemctl --version
	I0819 19:58:31.497813  481009 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:58:31.655864  481009 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:58:31.663634  481009 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:58:31.663722  481009 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:58:31.689360  481009 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:58:31.689387  481009 start.go:495] detecting cgroup driver to use...
	I0819 19:58:31.689462  481009 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:58:31.711688  481009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:58:31.725396  481009 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:58:31.725459  481009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:58:31.740753  481009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:58:31.762925  481009 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:58:32.034938  481009 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:58:32.285880  481009 docker.go:233] disabling docker service ...
	I0819 19:58:32.285952  481009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:58:32.324663  481009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:58:32.351309  481009 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:58:32.619261  481009 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:58:32.827081  481009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:58:32.843478  481009 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:58:32.864246  481009 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0819 19:58:32.864308  481009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:32.874899  481009 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:58:32.874998  481009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:32.885595  481009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:32.895009  481009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:32.904900  481009 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:58:32.914721  481009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:32.931214  481009 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:32.962451  481009 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:32.974570  481009 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:58:32.985846  481009 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:58:32.996448  481009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:58:33.186818  481009 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:58:33.723342  481009 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:58:33.723433  481009 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:58:33.728541  481009 start.go:563] Will wait 60s for crictl version
	I0819 19:58:33.728615  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:33.732579  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:58:33.761608  481009 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I0819 19:58:33.761704  481009 ssh_runner.go:195] Run: crio --version
	I0819 19:58:33.797721  481009 ssh_runner.go:195] Run: crio --version
	I0819 19:58:33.849409  481009 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
	I0819 19:58:33.850534  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetIP
	I0819 19:58:33.853502  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:33.854097  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:a9:2c", ip: ""} in network mk-running-upgrade-814149: {Iface:virbr1 ExpiryTime:2024-08-19 20:57:20 +0000 UTC Type:0 Mac:52:54:00:c3:a9:2c Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:running-upgrade-814149 Clientid:01:52:54:00:c3:a9:2c}
	I0819 19:58:33.854127  481009 main.go:141] libmachine: (running-upgrade-814149) DBG | domain running-upgrade-814149 has defined IP address 192.168.39.238 and MAC address 52:54:00:c3:a9:2c in network mk-running-upgrade-814149
	I0819 19:58:33.854409  481009 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:58:33.858591  481009 kubeadm.go:883] updating cluster {Name:running-upgrade-814149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-814149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0819 19:58:33.858708  481009 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0819 19:58:33.858769  481009 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:58:33.899847  481009 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0819 19:58:33.899937  481009 ssh_runner.go:195] Run: which lz4
	I0819 19:58:33.903531  481009 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:58:33.907185  481009 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:58:33.907227  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (496813465 bytes)
	I0819 19:58:33.150123  481365 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0819 19:58:33.244336  481365 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0819 19:58:33.244530  481365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/NoKubernetes-803941/config.json ...
	I0819 19:58:33.244581  481365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/NoKubernetes-803941/config.json: {Name:mkb98ef1899eab6381ae643e270a19ddb3eb8009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:33.244788  481365 start.go:360] acquireMachinesLock for NoKubernetes-803941: {Name:mkca0fc91a0e4d2975d57f907b72beaa2c97e931 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:58:35.749247  481009 crio.go:462] duration metric: took 1.84575063s to copy over tarball
	I0819 19:58:35.749348  481009 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:58:39.681823  481009 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.932423821s)
	I0819 19:58:39.681866  481009 crio.go:469] duration metric: took 3.932583624s to extract the tarball
	I0819 19:58:39.681878  481009 ssh_runner.go:146] rm: /preloaded.tar.lz4
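
The preload handling above is: stat /preloaded.tar.lz4, copy the cached tarball over when it is missing, untar it into /var while preserving xattrs, then remove it. A minimal local sketch of that extraction step (an illustration, not the ssh_runner implementation; it assumes the tarball is already at /preloaded.tar.lz4):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		// mirrors the failed `stat -c "%s %y"` existence check in the log
		fmt.Println("no preload tarball:", err)
		return
	}
	// Same flags as the log: keep xattrs (security.capability), decompress with lz4, extract into /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	// the log deletes the tarball once the images are unpacked
	_ = exec.Command("sudo", "rm", "-f", tarball).Run()
}
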
	I0819 19:58:39.729354  481009 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:58:39.765283  481009 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0819 19:58:39.765311  481009 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:58:39.765379  481009 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 19:58:39.765404  481009 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:58:39.765437  481009 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 19:58:39.765454  481009 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 19:58:39.765378  481009 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:58:39.765513  481009 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 19:58:39.765511  481009 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 19:58:39.765496  481009 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 19:58:39.767023  481009 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 19:58:39.767060  481009 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 19:58:39.767019  481009 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 19:58:39.767096  481009 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 19:58:39.767027  481009 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:58:39.767140  481009 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 19:58:39.767519  481009 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 19:58:39.767542  481009 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
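
The "couldn't find preloaded image ... assuming images are not preloaded" decision above comes from listing the runtime's images through crictl and looking for the expected kube-apiserver tag. A sketch of that check (the JSON field names below describe crictl's usual output shape and are an assumption, not something printed in this log):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList approximates the shape of `crictl images --output json`
// (field names are assumed here, not shown in the log).
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	const want = "registry.k8s.io/kube-apiserver:v1.24.1"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preload already present")
				return
			}
		}
	}
	fmt.Println("assuming images are not preloaded") // same conclusion as crio.go:510 above
}
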
	I0819 19:58:40.132027  480165 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 19:58:40.132090  480165 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:58:40.132182  480165 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:58:40.132297  480165 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:58:40.132417  480165 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 19:58:40.132497  480165 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:58:40.133861  480165 out.go:235]   - Generating certificates and keys ...
	I0819 19:58:40.133979  480165 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:58:40.134056  480165 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:58:40.134140  480165 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 19:58:40.134217  480165 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 19:58:40.134291  480165 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 19:58:40.134346  480165 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 19:58:40.134407  480165 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 19:58:40.134554  480165 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-228973 localhost] and IPs [192.168.72.176 127.0.0.1 ::1]
	I0819 19:58:40.134615  480165 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 19:58:40.134766  480165 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-228973 localhost] and IPs [192.168.72.176 127.0.0.1 ::1]
	I0819 19:58:40.134850  480165 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 19:58:40.134923  480165 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 19:58:40.134975  480165 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 19:58:40.135039  480165 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:58:40.135100  480165 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:58:40.135165  480165 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 19:58:40.135227  480165 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:58:40.135303  480165 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:58:40.135367  480165 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:58:40.135458  480165 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:58:40.135541  480165 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:58:40.137056  480165 out.go:235]   - Booting up control plane ...
	I0819 19:58:40.137208  480165 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:58:40.137307  480165 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:58:40.137384  480165 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:58:40.137553  480165 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:58:40.137696  480165 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:58:40.137751  480165 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:58:40.137917  480165 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 19:58:40.138037  480165 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 19:58:40.138105  480165 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.633408ms
	I0819 19:58:40.138187  480165 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 19:58:40.138256  480165 kubeadm.go:310] [api-check] The API server is healthy after 6.001455262s
	I0819 19:58:40.138399  480165 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 19:58:40.138540  480165 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 19:58:40.138606  480165 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 19:58:40.138836  480165 kubeadm.go:310] [mark-control-plane] Marking the node cert-expiration-228973 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 19:58:40.138901  480165 kubeadm.go:310] [bootstrap-token] Using token: zlroav.q8awq3g8noywle77
	I0819 19:58:40.140385  480165 out.go:235]   - Configuring RBAC rules ...
	I0819 19:58:40.140536  480165 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 19:58:40.140690  480165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 19:58:40.140927  480165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 19:58:40.141168  480165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 19:58:40.141303  480165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 19:58:40.141411  480165 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 19:58:40.141592  480165 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 19:58:40.141677  480165 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 19:58:40.141737  480165 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 19:58:40.141742  480165 kubeadm.go:310] 
	I0819 19:58:40.141825  480165 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 19:58:40.141831  480165 kubeadm.go:310] 
	I0819 19:58:40.141934  480165 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 19:58:40.141939  480165 kubeadm.go:310] 
	I0819 19:58:40.141967  480165 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 19:58:40.142074  480165 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 19:58:40.142135  480165 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 19:58:40.142141  480165 kubeadm.go:310] 
	I0819 19:58:40.142222  480165 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 19:58:40.142229  480165 kubeadm.go:310] 
	I0819 19:58:40.142289  480165 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 19:58:40.142294  480165 kubeadm.go:310] 
	I0819 19:58:40.142352  480165 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 19:58:40.142448  480165 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 19:58:40.142531  480165 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 19:58:40.142536  480165 kubeadm.go:310] 
	I0819 19:58:40.142632  480165 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 19:58:40.142746  480165 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 19:58:40.142754  480165 kubeadm.go:310] 
	I0819 19:58:40.142868  480165 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zlroav.q8awq3g8noywle77 \
	I0819 19:58:40.142990  480165 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f \
	I0819 19:58:40.143022  480165 kubeadm.go:310] 	--control-plane 
	I0819 19:58:40.143026  480165 kubeadm.go:310] 
	I0819 19:58:40.143122  480165 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 19:58:40.143127  480165 kubeadm.go:310] 
	I0819 19:58:40.143246  480165 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zlroav.q8awq3g8noywle77 \
	I0819 19:58:40.143371  480165 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c212633e662cd99da3d8606bd6a84af59244c27869e130712d5ac9ab2c6f 
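
By kubeadm convention, the --discovery-token-ca-cert-hash shown above is a SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A small sketch of recomputing such a value on the control-plane node (the ca.crt path is the standard kubeadm location and is an assumption, not something printed in this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the SubjectPublicKeyInfo, which is what kubeadm's sha256:... format refers to.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
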
	I0819 19:58:40.143400  480165 cni.go:84] Creating CNI manager for ""
	I0819 19:58:40.143409  480165 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:58:40.144947  480165 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:58:40.507535  481365 start.go:364] duration metric: took 7.262725011s to acquireMachinesLock for "NoKubernetes-803941"
	I0819 19:58:40.507590  481365 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-803941 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-803941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:58:40.507701  481365 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 19:58:40.146212  480165 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:58:40.160991  480165 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:58:40.190866  480165 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:58:40.190995  480165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:58:40.191019  480165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-228973 minikube.k8s.io/updated_at=2024_08_19T19_58_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=cert-expiration-228973 minikube.k8s.io/primary=true
	I0819 19:58:40.563567  480165 ops.go:34] apiserver oom_adj: -16
	I0819 19:58:40.563615  480165 kubeadm.go:1113] duration metric: took 372.695701ms to wait for elevateKubeSystemPrivileges
	I0819 19:58:40.563632  480165 kubeadm.go:394] duration metric: took 13.560252715s to StartCluster
	I0819 19:58:40.563654  480165 settings.go:142] acquiring lock: {Name:mkc71def6be9966e5f008a22161bf3ed26f482b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:40.563726  480165 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:58:40.565397  480165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:40.565715  480165 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.176 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:58:40.565914  480165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 19:58:40.566193  480165 config.go:182] Loaded profile config "cert-expiration-228973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:58:40.566249  480165 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:58:40.566309  480165 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-228973"
	I0819 19:58:40.566344  480165 addons.go:234] Setting addon storage-provisioner=true in "cert-expiration-228973"
	I0819 19:58:40.566374  480165 host.go:66] Checking if "cert-expiration-228973" exists ...
	I0819 19:58:40.566788  480165 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:40.566810  480165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:40.566996  480165 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-228973"
	I0819 19:58:40.567024  480165 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-228973"
	I0819 19:58:40.567424  480165 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:40.567448  480165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:40.571221  480165 out.go:177] * Verifying Kubernetes components...
	I0819 19:58:40.572692  480165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:58:40.590073  480165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0819 19:58:40.590543  480165 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:40.591122  480165 main.go:141] libmachine: Using API Version  1
	I0819 19:58:40.591136  480165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:40.591550  480165 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:40.591767  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetState
	I0819 19:58:40.595216  480165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0819 19:58:40.595628  480165 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:40.596185  480165 main.go:141] libmachine: Using API Version  1
	I0819 19:58:40.596197  480165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:40.596605  480165 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:40.597231  480165 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:40.597265  480165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:40.603904  480165 addons.go:234] Setting addon default-storageclass=true in "cert-expiration-228973"
	I0819 19:58:40.603942  480165 host.go:66] Checking if "cert-expiration-228973" exists ...
	I0819 19:58:40.604392  480165 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:40.604438  480165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:40.623382  480165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0819 19:58:40.624023  480165 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:40.624713  480165 main.go:141] libmachine: Using API Version  1
	I0819 19:58:40.624726  480165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:40.625089  480165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35059
	I0819 19:58:40.625271  480165 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:40.625495  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetState
	I0819 19:58:40.625731  480165 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:40.626233  480165 main.go:141] libmachine: Using API Version  1
	I0819 19:58:40.626246  480165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:40.626717  480165 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:40.627550  480165 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:40.627583  480165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:40.629971  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .DriverName
	I0819 19:58:40.631759  480165 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:58:40.633505  480165 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:58:40.633521  480165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:58:40.633547  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHHostname
	I0819 19:58:40.641887  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | domain cert-expiration-228973 has defined MAC address 52:54:00:b8:48:94 in network mk-cert-expiration-228973
	I0819 19:58:40.660555  480165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43501
	I0819 19:58:40.661380  480165 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:40.662049  480165 main.go:141] libmachine: Using API Version  1
	I0819 19:58:40.662064  480165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:40.662519  480165 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:40.662697  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetState
	I0819 19:58:40.671048  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .DriverName
	I0819 19:58:40.671370  480165 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:58:40.671383  480165 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:58:40.671406  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHHostname
	I0819 19:58:40.675791  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | domain cert-expiration-228973 has defined MAC address 52:54:00:b8:48:94 in network mk-cert-expiration-228973
	I0819 19:58:40.681714  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:48:94", ip: ""} in network mk-cert-expiration-228973: {Iface:virbr4 ExpiryTime:2024-08-19 20:58:08 +0000 UTC Type:0 Mac:52:54:00:b8:48:94 Iaid: IPaddr:192.168.72.176 Prefix:24 Hostname:cert-expiration-228973 Clientid:01:52:54:00:b8:48:94}
	I0819 19:58:40.681739  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | domain cert-expiration-228973 has defined IP address 192.168.72.176 and MAC address 52:54:00:b8:48:94 in network mk-cert-expiration-228973
	I0819 19:58:40.681865  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:48:94", ip: ""} in network mk-cert-expiration-228973: {Iface:virbr4 ExpiryTime:2024-08-19 20:58:08 +0000 UTC Type:0 Mac:52:54:00:b8:48:94 Iaid: IPaddr:192.168.72.176 Prefix:24 Hostname:cert-expiration-228973 Clientid:01:52:54:00:b8:48:94}
	I0819 19:58:40.681886  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | domain cert-expiration-228973 has defined IP address 192.168.72.176 and MAC address 52:54:00:b8:48:94 in network mk-cert-expiration-228973
	I0819 19:58:40.682453  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHPort
	I0819 19:58:40.682496  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHPort
	I0819 19:58:40.682741  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHKeyPath
	I0819 19:58:40.682780  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHKeyPath
	I0819 19:58:40.682892  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHUsername
	I0819 19:58:40.682929  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .GetSSHUsername
	I0819 19:58:40.683035  480165 sshutil.go:53] new ssh client: &{IP:192.168.72.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/cert-expiration-228973/id_rsa Username:docker}
	I0819 19:58:40.683069  480165 sshutil.go:53] new ssh client: &{IP:192.168.72.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/cert-expiration-228973/id_rsa Username:docker}
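
The `new ssh client` entries above show the endpoint, key path and user the test harness connects with. A sketch of opening an equivalent session with the golang.org/x/crypto/ssh package (values copied from the log; this is an illustration, not minikube's sshutil):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19423-430949/.minikube/machines/cert-expiration-228973/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", "192.168.72.176:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected, server version:", string(client.ServerVersion()))
}
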
	I0819 19:58:40.846804  480165 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:58:40.847012  480165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 19:58:40.971166  480165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:58:40.984333  480165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
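
Applying the two addon manifests above is just an invocation of the cached kubectl binary with an explicit kubeconfig, under sudo. A minimal sketch of that call from Go (paths copied from the log; an illustration rather than minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const kubectl = "/var/lib/minikube/binaries/v1.31.0/kubectl"
	manifests := []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	}
	for _, m := range manifests {
		// Same shape as the logged command: sudo KUBECONFIG=... kubectl apply -f <manifest>
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply", "-f", m)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("apply %s failed: %v\n%s", m, err, out)
			return
		}
	}
	fmt.Println("addon manifests applied")
}
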
	I0819 19:58:41.450229  480165 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0819 19:58:41.450394  480165 main.go:141] libmachine: Making call to close driver server
	I0819 19:58:41.450408  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .Close
	I0819 19:58:41.451593  480165 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:58:41.451656  480165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:58:41.452584  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | Closing plugin on server side
	I0819 19:58:41.452662  480165 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:58:41.452684  480165 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:58:41.452693  480165 main.go:141] libmachine: Making call to close driver server
	I0819 19:58:41.452700  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .Close
	I0819 19:58:41.453029  480165 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:58:41.453047  480165 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:58:41.470367  480165 main.go:141] libmachine: Making call to close driver server
	I0819 19:58:41.470383  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .Close
	I0819 19:58:41.470814  480165 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:58:41.470824  480165 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:58:41.827582  480165 main.go:141] libmachine: Making call to close driver server
	I0819 19:58:41.827607  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .Close
	I0819 19:58:41.827718  480165 api_server.go:72] duration metric: took 1.261974394s to wait for apiserver process to appear ...
	I0819 19:58:41.827729  480165 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:58:41.827746  480165 api_server.go:253] Checking apiserver healthz at https://192.168.72.176:8443/healthz ...
	I0819 19:58:41.830096  480165 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:58:41.830081  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | Closing plugin on server side
	I0819 19:58:41.830112  480165 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:58:41.830122  480165 main.go:141] libmachine: Making call to close driver server
	I0819 19:58:41.830131  480165 main.go:141] libmachine: (cert-expiration-228973) Calling .Close
	I0819 19:58:41.830544  480165 main.go:141] libmachine: (cert-expiration-228973) DBG | Closing plugin on server side
	I0819 19:58:41.830579  480165 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:58:41.830585  480165 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:58:41.833022  480165 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 19:58:41.834532  480165 addons.go:510] duration metric: took 1.26828143s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0819 19:58:41.846100  480165 api_server.go:279] https://192.168.72.176:8443/healthz returned 200:
	ok
	I0819 19:58:41.847765  480165 api_server.go:141] control plane version: v1.31.0
	I0819 19:58:41.847786  480165 api_server.go:131] duration metric: took 20.051025ms to wait for apiserver health ...
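
The healthz wait above is a plain HTTPS GET against the API server endpoint until it answers 200 "ok". A minimal probe sketch (certificate verification is skipped because the host does not trust the cluster's serving cert; that mirrors a test probe, not a recommendation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.176:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok", matching the log
}
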
	I0819 19:58:41.847795  480165 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:58:41.855405  480165 system_pods.go:59] 5 kube-system pods found
	I0819 19:58:41.855436  480165 system_pods.go:61] "etcd-cert-expiration-228973" [7d26eeae-0512-409b-95df-c64266cd3b8a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:58:41.855447  480165 system_pods.go:61] "kube-apiserver-cert-expiration-228973" [b378412e-bc19-4356-9790-6ca9cbc293fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:58:41.855458  480165 system_pods.go:61] "kube-controller-manager-cert-expiration-228973" [6b3a63db-cf89-4eeb-9ad9-baab6d35b0ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:58:41.855466  480165 system_pods.go:61] "kube-scheduler-cert-expiration-228973" [5441b59b-f632-4ac0-ac19-3e51200f416e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:58:41.855472  480165 system_pods.go:61] "storage-provisioner" [de2418b7-a74d-4ce1-bf65-bbb72aecc537] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0819 19:58:41.855480  480165 system_pods.go:74] duration metric: took 7.678476ms to wait for pod list to return data ...
	I0819 19:58:41.855502  480165 kubeadm.go:582] duration metric: took 1.289757145s to wait for: map[apiserver:true system_pods:true]
	I0819 19:58:41.855518  480165 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:58:41.860294  480165 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:58:41.860310  480165 node_conditions.go:123] node cpu capacity is 2
	I0819 19:58:41.860319  480165 node_conditions.go:105] duration metric: took 4.798282ms to run NodePressure ...
	I0819 19:58:41.860330  480165 start.go:241] waiting for startup goroutines ...
	I0819 19:58:41.956408  480165 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-228973" context rescaled to 1 replicas
	I0819 19:58:41.956441  480165 start.go:246] waiting for cluster config update ...
	I0819 19:58:41.956451  480165 start.go:255] writing updated cluster config ...
	I0819 19:58:41.956715  480165 ssh_runner.go:195] Run: rm -f paused
	I0819 19:58:42.041300  480165 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:58:42.043158  480165 out.go:177] * Done! kubectl is now configured to use "cert-expiration-228973" cluster and "default" namespace by default
	I0819 19:58:40.509331  481365 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0819 19:58:40.509590  481365 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:40.509624  481365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:40.530931  481365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0819 19:58:40.531598  481365 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:40.532164  481365 main.go:141] libmachine: Using API Version  1
	I0819 19:58:40.532176  481365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:40.532544  481365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:40.532730  481365 main.go:141] libmachine: (NoKubernetes-803941) Calling .GetMachineName
	I0819 19:58:40.532834  481365 main.go:141] libmachine: (NoKubernetes-803941) Calling .DriverName
	I0819 19:58:40.532960  481365 start.go:159] libmachine.API.Create for "NoKubernetes-803941" (driver="kvm2")
	I0819 19:58:40.532974  481365 client.go:168] LocalClient.Create starting
	I0819 19:58:40.533021  481365 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem
	I0819 19:58:40.533063  481365 main.go:141] libmachine: Decoding PEM data...
	I0819 19:58:40.533078  481365 main.go:141] libmachine: Parsing certificate...
	I0819 19:58:40.533266  481365 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem
	I0819 19:58:40.533292  481365 main.go:141] libmachine: Decoding PEM data...
	I0819 19:58:40.533311  481365 main.go:141] libmachine: Parsing certificate...
	I0819 19:58:40.533331  481365 main.go:141] libmachine: Running pre-create checks...
	I0819 19:58:40.533339  481365 main.go:141] libmachine: (NoKubernetes-803941) Calling .PreCreateCheck
	I0819 19:58:40.533877  481365 main.go:141] libmachine: (NoKubernetes-803941) Calling .GetConfigRaw
	I0819 19:58:40.534523  481365 main.go:141] libmachine: Creating machine...
	I0819 19:58:40.534534  481365 main.go:141] libmachine: (NoKubernetes-803941) Calling .Create
	I0819 19:58:40.534718  481365 main.go:141] libmachine: (NoKubernetes-803941) Creating KVM machine...
	I0819 19:58:40.536082  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | found existing default KVM network
	I0819 19:58:40.537730  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:40.537522  481419 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3c:88:e7} reservation:<nil>}
	I0819 19:58:40.539077  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:40.538969  481419 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fb:75:fb} reservation:<nil>}
	I0819 19:58:40.540850  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:40.540741  481419 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000205fc0}
	I0819 19:58:40.540899  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | created network xml: 
	I0819 19:58:40.540910  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | <network>
	I0819 19:58:40.540919  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |   <name>mk-NoKubernetes-803941</name>
	I0819 19:58:40.540935  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |   <dns enable='no'/>
	I0819 19:58:40.540951  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |   
	I0819 19:58:40.540963  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0819 19:58:40.540970  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |     <dhcp>
	I0819 19:58:40.540978  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0819 19:58:40.540986  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |     </dhcp>
	I0819 19:58:40.540992  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |   </ip>
	I0819 19:58:40.540998  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG |   
	I0819 19:58:40.541003  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | </network>
	I0819 19:58:40.541012  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | 
	I0819 19:58:40.548622  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | trying to create private KVM network mk-NoKubernetes-803941 192.168.61.0/24...
	I0819 19:58:40.689282  481365 main.go:141] libmachine: (NoKubernetes-803941) Setting up store path in /home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941 ...
	I0819 19:58:40.689306  481365 main.go:141] libmachine: (NoKubernetes-803941) Building disk image from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 19:58:40.689328  481365 main.go:141] libmachine: (NoKubernetes-803941) Downloading /home/jenkins/minikube-integration/19423-430949/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 19:58:40.689345  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | private KVM network mk-NoKubernetes-803941 192.168.61.0/24 created
	I0819 19:58:40.689360  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:40.681264  481419 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:58:41.057317  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:41.053122  481419 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941/id_rsa...
	I0819 19:58:41.138052  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:41.137928  481419 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941/NoKubernetes-803941.rawdisk...
	I0819 19:58:41.138189  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Writing magic tar header
	I0819 19:58:41.138211  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Writing SSH key tar header
	I0819 19:58:41.138377  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:41.138308  481419 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941 ...
	I0819 19:58:41.138485  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941
	I0819 19:58:41.138506  481365 main.go:141] libmachine: (NoKubernetes-803941) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941 (perms=drwx------)
	I0819 19:58:41.138531  481365 main.go:141] libmachine: (NoKubernetes-803941) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube/machines (perms=drwxr-xr-x)
	I0819 19:58:41.138540  481365 main.go:141] libmachine: (NoKubernetes-803941) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949/.minikube (perms=drwxr-xr-x)
	I0819 19:58:41.138549  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube/machines
	I0819 19:58:41.138568  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:58:41.138576  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-430949
	I0819 19:58:41.138588  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 19:58:41.138595  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Checking permissions on dir: /home/jenkins
	I0819 19:58:41.138605  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Checking permissions on dir: /home
	I0819 19:58:41.138615  481365 main.go:141] libmachine: (NoKubernetes-803941) Setting executable bit set on /home/jenkins/minikube-integration/19423-430949 (perms=drwxrwxr-x)
	I0819 19:58:41.138622  481365 main.go:141] libmachine: (NoKubernetes-803941) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 19:58:41.138631  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | Skipping /home - not owner
	I0819 19:58:41.138637  481365 main.go:141] libmachine: (NoKubernetes-803941) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 19:58:41.138645  481365 main.go:141] libmachine: (NoKubernetes-803941) Creating domain...
	I0819 19:58:41.141098  481365 main.go:141] libmachine: (NoKubernetes-803941) define libvirt domain using xml: 
	I0819 19:58:41.141113  481365 main.go:141] libmachine: (NoKubernetes-803941) <domain type='kvm'>
	I0819 19:58:41.141119  481365 main.go:141] libmachine: (NoKubernetes-803941)   <name>NoKubernetes-803941</name>
	I0819 19:58:41.141124  481365 main.go:141] libmachine: (NoKubernetes-803941)   <memory unit='MiB'>6000</memory>
	I0819 19:58:41.141144  481365 main.go:141] libmachine: (NoKubernetes-803941)   <vcpu>2</vcpu>
	I0819 19:58:41.141150  481365 main.go:141] libmachine: (NoKubernetes-803941)   <features>
	I0819 19:58:41.141156  481365 main.go:141] libmachine: (NoKubernetes-803941)     <acpi/>
	I0819 19:58:41.141163  481365 main.go:141] libmachine: (NoKubernetes-803941)     <apic/>
	I0819 19:58:41.141169  481365 main.go:141] libmachine: (NoKubernetes-803941)     <pae/>
	I0819 19:58:41.141174  481365 main.go:141] libmachine: (NoKubernetes-803941)     
	I0819 19:58:41.141180  481365 main.go:141] libmachine: (NoKubernetes-803941)   </features>
	I0819 19:58:41.141186  481365 main.go:141] libmachine: (NoKubernetes-803941)   <cpu mode='host-passthrough'>
	I0819 19:58:41.141192  481365 main.go:141] libmachine: (NoKubernetes-803941)   
	I0819 19:58:41.141197  481365 main.go:141] libmachine: (NoKubernetes-803941)   </cpu>
	I0819 19:58:41.141203  481365 main.go:141] libmachine: (NoKubernetes-803941)   <os>
	I0819 19:58:41.141208  481365 main.go:141] libmachine: (NoKubernetes-803941)     <type>hvm</type>
	I0819 19:58:41.141215  481365 main.go:141] libmachine: (NoKubernetes-803941)     <boot dev='cdrom'/>
	I0819 19:58:41.141221  481365 main.go:141] libmachine: (NoKubernetes-803941)     <boot dev='hd'/>
	I0819 19:58:41.141227  481365 main.go:141] libmachine: (NoKubernetes-803941)     <bootmenu enable='no'/>
	I0819 19:58:41.141232  481365 main.go:141] libmachine: (NoKubernetes-803941)   </os>
	I0819 19:58:41.141240  481365 main.go:141] libmachine: (NoKubernetes-803941)   <devices>
	I0819 19:58:41.141247  481365 main.go:141] libmachine: (NoKubernetes-803941)     <disk type='file' device='cdrom'>
	I0819 19:58:41.141258  481365 main.go:141] libmachine: (NoKubernetes-803941)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941/boot2docker.iso'/>
	I0819 19:58:41.141272  481365 main.go:141] libmachine: (NoKubernetes-803941)       <target dev='hdc' bus='scsi'/>
	I0819 19:58:41.141279  481365 main.go:141] libmachine: (NoKubernetes-803941)       <readonly/>
	I0819 19:58:41.141283  481365 main.go:141] libmachine: (NoKubernetes-803941)     </disk>
	I0819 19:58:41.141299  481365 main.go:141] libmachine: (NoKubernetes-803941)     <disk type='file' device='disk'>
	I0819 19:58:41.141308  481365 main.go:141] libmachine: (NoKubernetes-803941)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 19:58:41.141319  481365 main.go:141] libmachine: (NoKubernetes-803941)       <source file='/home/jenkins/minikube-integration/19423-430949/.minikube/machines/NoKubernetes-803941/NoKubernetes-803941.rawdisk'/>
	I0819 19:58:41.141324  481365 main.go:141] libmachine: (NoKubernetes-803941)       <target dev='hda' bus='virtio'/>
	I0819 19:58:41.141331  481365 main.go:141] libmachine: (NoKubernetes-803941)     </disk>
	I0819 19:58:41.141337  481365 main.go:141] libmachine: (NoKubernetes-803941)     <interface type='network'>
	I0819 19:58:41.141345  481365 main.go:141] libmachine: (NoKubernetes-803941)       <source network='mk-NoKubernetes-803941'/>
	I0819 19:58:41.141351  481365 main.go:141] libmachine: (NoKubernetes-803941)       <model type='virtio'/>
	I0819 19:58:41.141358  481365 main.go:141] libmachine: (NoKubernetes-803941)     </interface>
	I0819 19:58:41.141364  481365 main.go:141] libmachine: (NoKubernetes-803941)     <interface type='network'>
	I0819 19:58:41.141372  481365 main.go:141] libmachine: (NoKubernetes-803941)       <source network='default'/>
	I0819 19:58:41.141378  481365 main.go:141] libmachine: (NoKubernetes-803941)       <model type='virtio'/>
	I0819 19:58:41.141385  481365 main.go:141] libmachine: (NoKubernetes-803941)     </interface>
	I0819 19:58:41.141390  481365 main.go:141] libmachine: (NoKubernetes-803941)     <serial type='pty'>
	I0819 19:58:41.141397  481365 main.go:141] libmachine: (NoKubernetes-803941)       <target port='0'/>
	I0819 19:58:41.141403  481365 main.go:141] libmachine: (NoKubernetes-803941)     </serial>
	I0819 19:58:41.141409  481365 main.go:141] libmachine: (NoKubernetes-803941)     <console type='pty'>
	I0819 19:58:41.141414  481365 main.go:141] libmachine: (NoKubernetes-803941)       <target type='serial' port='0'/>
	I0819 19:58:41.141419  481365 main.go:141] libmachine: (NoKubernetes-803941)     </console>
	I0819 19:58:41.141424  481365 main.go:141] libmachine: (NoKubernetes-803941)     <rng model='virtio'>
	I0819 19:58:41.141432  481365 main.go:141] libmachine: (NoKubernetes-803941)       <backend model='random'>/dev/random</backend>
	I0819 19:58:41.141438  481365 main.go:141] libmachine: (NoKubernetes-803941)     </rng>
	I0819 19:58:41.141443  481365 main.go:141] libmachine: (NoKubernetes-803941)     
	I0819 19:58:41.141448  481365 main.go:141] libmachine: (NoKubernetes-803941)     
	I0819 19:58:41.141453  481365 main.go:141] libmachine: (NoKubernetes-803941)   </devices>
	I0819 19:58:41.141458  481365 main.go:141] libmachine: (NoKubernetes-803941) </domain>
	I0819 19:58:41.141469  481365 main.go:141] libmachine: (NoKubernetes-803941) 
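
Defining and booting a guest from XML like the dump above comes down to two libvirt calls. A hedged sketch using the libvirt Go bindings (the module path libvirt.org/go/libvirt and the heavily trimmed placeholder XML are assumptions; the real definition is the <domain> block logged above and needs its disk and network devices):

package main

import (
	"fmt"

	"libvirt.org/go/libvirt"
)

func main() {
	// Same URI the cluster config above passes as KVMQemuURI.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Placeholder only: a bootable definition also needs the <devices> section shown in the log.
	domainXML := `<domain type='kvm'>
  <name>NoKubernetes-803941</name>
  <memory unit='MiB'>6000</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type></os>
</domain>`

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // starts the defined domain
		panic(err)
	}
	fmt.Println("domain defined and started")
}
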
	I0819 19:58:41.148013  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:f6:ea:95 in network default
	I0819 19:58:41.148424  481365 main.go:141] libmachine: (NoKubernetes-803941) Ensuring networks are active...
	I0819 19:58:41.148447  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:41.150010  481365 main.go:141] libmachine: (NoKubernetes-803941) Ensuring network default is active
	I0819 19:58:41.150483  481365 main.go:141] libmachine: (NoKubernetes-803941) Ensuring network mk-NoKubernetes-803941 is active
	I0819 19:58:41.151535  481365 main.go:141] libmachine: (NoKubernetes-803941) Getting domain xml...
	I0819 19:58:41.152576  481365 main.go:141] libmachine: (NoKubernetes-803941) Creating domain...
	I0819 19:58:42.936550  481365 main.go:141] libmachine: (NoKubernetes-803941) Waiting to get IP...
	I0819 19:58:42.937470  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:42.938079  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:42.938140  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:42.938073  481419 retry.go:31] will retry after 233.031281ms: waiting for machine to come up
	I0819 19:58:40.212535  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:58:40.212579  481208 machine.go:96] duration metric: took 8.823594081s to provisionDockerMachine
	I0819 19:58:40.212595  481208 start.go:293] postStartSetup for "pause-232147" (driver="kvm2")
	I0819 19:58:40.212609  481208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:58:40.212642  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.213057  481208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:58:40.213092  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:40.216311  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.216817  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.216844  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.217076  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:40.217330  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.217515  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:40.217682  481208 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/pause-232147/id_rsa Username:docker}
	I0819 19:58:40.316283  481208 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:58:40.322399  481208 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:58:40.322447  481208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/addons for local assets ...
	I0819 19:58:40.322557  481208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-430949/.minikube/files for local assets ...
	I0819 19:58:40.322676  481208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem -> 4381592.pem in /etc/ssl/certs
	I0819 19:58:40.322820  481208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:58:40.337792  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:58:40.372596  481208 start.go:296] duration metric: took 159.984571ms for postStartSetup
	I0819 19:58:40.372650  481208 fix.go:56] duration metric: took 9.01375792s for fixHost
	I0819 19:58:40.372680  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:40.376119  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.376610  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.376639  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.376989  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:40.377312  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.377518  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.377676  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:40.377858  481208 main.go:141] libmachine: Using SSH client type: native
	I0819 19:58:40.378087  481208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0819 19:58:40.378105  481208 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:58:40.507374  481208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724097520.497291171
	
	I0819 19:58:40.507408  481208 fix.go:216] guest clock: 1724097520.497291171
	I0819 19:58:40.507418  481208 fix.go:229] Guest: 2024-08-19 19:58:40.497291171 +0000 UTC Remote: 2024-08-19 19:58:40.372656161 +0000 UTC m=+11.987187457 (delta=124.63501ms)
	I0819 19:58:40.507448  481208 fix.go:200] guest clock delta is within tolerance: 124.63501ms
	I0819 19:58:40.507456  481208 start.go:83] releasing machines lock for "pause-232147", held for 9.148597464s
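The fix.go lines above compare the guest's clock (read via "date +%s.%N" over SSH) against the host's clock and accept the ~124ms delta as within tolerance. A small sketch of that comparison; the one-second tolerance below is an assumption for illustration, not minikube's exact threshold:

package main

import (
	"fmt"
	"time"
)

func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(124 * time.Millisecond) // roughly the delta seen in the log
	if delta, ok := clockWithinTolerance(guest, host, time.Second); ok {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}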
	I0819 19:58:40.507935  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.508287  481208 main.go:141] libmachine: (pause-232147) Calling .GetIP
	I0819 19:58:40.513009  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.513574  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.513608  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.514007  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.514704  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.514942  481208 main.go:141] libmachine: (pause-232147) Calling .DriverName
	I0819 19:58:40.515039  481208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:58:40.515086  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:40.515181  481208 ssh_runner.go:195] Run: cat /version.json
	I0819 19:58:40.515194  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHHostname
	I0819 19:58:40.519584  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.519937  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.520469  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.520648  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.520677  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:40.520717  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:40.521387  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:40.521428  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHPort
	I0819 19:58:40.521678  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.521854  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:40.522053  481208 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/pause-232147/id_rsa Username:docker}
	I0819 19:58:40.522692  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHKeyPath
	I0819 19:58:40.522868  481208 main.go:141] libmachine: (pause-232147) Calling .GetSSHUsername
	I0819 19:58:40.523041  481208 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/pause-232147/id_rsa Username:docker}
	I0819 19:58:40.639459  481208 ssh_runner.go:195] Run: systemctl --version
	I0819 19:58:40.655718  481208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:58:40.840962  481208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:58:40.853198  481208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:58:40.853277  481208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:58:40.868796  481208 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 19:58:40.868827  481208 start.go:495] detecting cgroup driver to use...
	I0819 19:58:40.868899  481208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:58:40.900245  481208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:58:40.920400  481208 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:58:40.920463  481208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:58:40.938443  481208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:58:40.955169  481208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:58:41.146319  481208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:58:41.353301  481208 docker.go:233] disabling docker service ...
	I0819 19:58:41.353400  481208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:58:41.387771  481208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:58:41.412948  481208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:58:41.554623  481208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:58:41.861017  481208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:58:41.982330  481208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:58:42.076111  481208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:58:42.076186  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.109280  481208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:58:42.109371  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.145729  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.189101  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.240615  481208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:58:42.274951  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.289026  481208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.305852  481208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:58:42.326459  481208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:58:42.346162  481208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:58:42.356878  481208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:58:42.608928  481208 ssh_runner.go:195] Run: sudo systemctl restart crio
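The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup_manager, conmon_cgroup, default_sysctls) before crio is restarted. An alternative, purely illustrative approach is to drop the same overrides into a later-sorting drop-in file; the filename and the direct write below are assumptions, not what minikube actually does:

package main

import (
	"log"
	"os"
)

const crioOverrides = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// CRI-O reads crio.conf.d drop-ins in lexical order, so a 99-* file
	// takes precedence over 02-crio.conf; restart crio afterwards.
	if err := os.WriteFile("/etc/crio/crio.conf.d/99-minikube-overrides.conf",
		[]byte(crioOverrides), 0o644); err != nil {
		log.Fatal(err)
	}
}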
	I0819 19:58:43.169953  481208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:58:43.170036  481208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:58:43.176235  481208 start.go:563] Will wait 60s for crictl version
	I0819 19:58:43.176307  481208 ssh_runner.go:195] Run: which crictl
	I0819 19:58:43.180628  481208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:58:43.216848  481208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:58:43.216956  481208 ssh_runner.go:195] Run: crio --version
	I0819 19:58:43.253336  481208 ssh_runner.go:195] Run: crio --version
	I0819 19:58:43.293348  481208 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:58:43.294479  481208 main.go:141] libmachine: (pause-232147) Calling .GetIP
	I0819 19:58:43.297894  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:43.298326  481208 main.go:141] libmachine: (pause-232147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:9e:6b", ip: ""} in network mk-pause-232147: {Iface:virbr2 ExpiryTime:2024-08-19 20:57:44 +0000 UTC Type:0 Mac:52:54:00:84:9e:6b Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-232147 Clientid:01:52:54:00:84:9e:6b}
	I0819 19:58:43.298362  481208 main.go:141] libmachine: (pause-232147) DBG | domain pause-232147 has defined IP address 192.168.50.125 and MAC address 52:54:00:84:9e:6b in network mk-pause-232147
	I0819 19:58:43.298669  481208 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:58:43.303270  481208 kubeadm.go:883] updating cluster {Name:pause-232147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:pause-232147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:58:43.303482  481208 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:58:43.303557  481208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:58:43.362917  481208 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:58:43.362952  481208 crio.go:433] Images already preloaded, skipping extraction
	I0819 19:58:43.363019  481208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:58:43.405430  481208 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:58:43.405463  481208 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:58:43.405474  481208 kubeadm.go:934] updating node { 192.168.50.125 8443 v1.31.0 crio true true} ...
	I0819 19:58:43.405617  481208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-232147 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-232147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
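The kubelet [Service] drop-in above is rendered from the node's name, IP and Kubernetes version. A minimal text/template sketch of that rendering; the struct field names are illustrative, not minikube's actual types:

package main

import (
	"log"
	"os"
	"text/template"
)

const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	err := tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.0", "pause-232147", "192.168.50.125"})
	if err != nil {
		log.Fatal(err)
	}
}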
	I0819 19:58:43.405717  481208 ssh_runner.go:195] Run: crio config
	I0819 19:58:39.926453  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 19:58:39.931939  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0819 19:58:39.935237  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 19:58:39.936632  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 19:58:39.943545  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:58:39.958493  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0819 19:58:39.960752  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0819 19:58:40.076539  481009 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0819 19:58:40.076654  481009 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 19:58:40.076724  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:40.117605  481009 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693" in container runtime
	I0819 19:58:40.117686  481009 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 19:58:40.117691  481009 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0819 19:58:40.117842  481009 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0819 19:58:40.117890  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:40.117903  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:40.158070  481009 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d" in container runtime
	I0819 19:58:40.158176  481009 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 19:58:40.158143  481009 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0819 19:58:40.158252  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:40.158284  481009 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:58:40.158334  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:40.177163  481009 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18" in container runtime
	I0819 19:58:40.177213  481009 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237" in container runtime
	I0819 19:58:40.177225  481009 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 19:58:40.177231  481009 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 19:58:40.177277  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:40.177294  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 19:58:40.177312  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 19:58:40.177322  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0819 19:58:40.177277  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:40.177363  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 19:58:40.177400  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:58:40.225534  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 19:58:40.300041  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 19:58:40.300118  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:58:40.300188  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 19:58:40.300264  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0819 19:58:40.300335  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 19:58:40.300376  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 19:58:40.300461  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 19:58:40.384846  481009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:58:40.454967  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 19:58:40.455109  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 19:58:40.455138  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 19:58:40.455267  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 19:58:40.455354  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
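The cache_images.go lines above decide that an image "needs transfer" when the runtime does not hold it at the expected ID, then remove the stale tag before loading the cached copy. A sketch of that decision, shelling out to podman locally as a simplification of minikube's ssh_runner (the image ID used below is the pause:3.7 hash from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present in the container runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	if needsTransfer("registry.k8s.io/pause:3.7",
		"221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165") {
		fmt.Println(`"registry.k8s.io/pause:3.7" needs transfer`)
	}
}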
	I0819 19:58:40.586167  481009 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.1
	I0819 19:58:40.586264  481009 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0819 19:58:40.586344  481009 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0819 19:58:40.613844  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 19:58:40.613929  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 19:58:40.613997  481009 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0819 19:58:40.614063  481009 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 19:58:40.614109  481009 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0819 19:58:40.614122  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0819 19:58:40.614165  481009 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 19:58:40.614205  481009 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 19:58:40.614235  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 19:58:40.723333  481009 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0819 19:58:40.723432  481009 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.1
	I0819 19:58:40.723472  481009 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.1
	I0819 19:58:40.723505  481009 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0819 19:58:40.723521  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I0819 19:58:40.723617  481009 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0819 19:58:40.723630  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0819 19:58:40.801630  481009 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 19:58:40.801710  481009 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0819 19:58:41.142214  481009 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0819 19:58:41.142252  481009 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 19:58:41.142302  481009 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0819 19:58:41.852181  481009 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 19:58:41.852232  481009 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0819 19:58:41.852290  481009 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0819 19:58:44.313025  481009 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.460697847s)
	I0819 19:58:44.313059  481009 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0819 19:58:44.313108  481009 cache_images.go:92] duration metric: took 4.547781665s to LoadCachedImages
	W0819 19:58:44.313229  481009 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-430949/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
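The load path above stats each tarball under /var/lib/minikube/images, scps it from the local cache only when missing, then runs "podman load" one image at a time. A sketch of that flow; the runner interface and its Stat/Copy/Run signatures are hypothetical stand-ins for minikube's ssh_runner:

package main

import "fmt"

type runner interface {
	Stat(path string) error          // e.g. `stat -c "%s %y" path` inside the guest
	Copy(local, remote string) error // scp the cached tarball into the guest
	Run(cmd string) error            // run a command inside the guest
}

func loadCachedImage(r runner, cached, remote string) error {
	if err := r.Stat(remote); err != nil {
		if err := r.Copy(cached, remote); err != nil {
			return fmt.Errorf("copy %s: %w", cached, err)
		}
	}
	// Loading is serialized, one tarball at a time, as in the log.
	return r.Run("sudo podman load -i " + remote)
}

type fakeRunner struct{}

func (fakeRunner) Stat(string) error      { return fmt.Errorf("no such file") }
func (fakeRunner) Copy(l, r string) error { fmt.Println("scp", l, "-->", r); return nil }
func (fakeRunner) Run(cmd string) error   { fmt.Println("run:", cmd); return nil }

func main() {
	_ = loadCachedImage(fakeRunner{}, "cache/pause_3.7", "/var/lib/minikube/images/pause_3.7")
}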
	I0819 19:58:44.313247  481009 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.24.1 crio true true} ...
	I0819 19:58:44.313370  481009 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=running-upgrade-814149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-814149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:58:44.313454  481009 ssh_runner.go:195] Run: crio config
	I0819 19:58:44.372482  481009 cni.go:84] Creating CNI manager for ""
	I0819 19:58:44.372506  481009 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:58:44.372515  481009 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:58:44.372534  481009 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-814149 NodeName:running-upgrade-814149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:58:44.372722  481009 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "running-upgrade-814149"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:58:44.372793  481009 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0819 19:58:44.383107  481009 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:58:44.383179  481009 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:58:44.393546  481009 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0819 19:58:44.410718  481009 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:58:44.428280  481009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
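The kubeadm config generated above and copied to /var/tmp/minikube/kubeadm.yaml.new is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that decodes such a stream and lists each document's kind, assuming gopkg.in/yaml.v3 is available; the shortened inline YAML is illustrative:

package main

import (
	"fmt"
	"io"
	"log"
	"strings"

	"gopkg.in/yaml.v3"
)

const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.1
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(kubeadmYAML))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s (%s)\n", doc["kind"], doc["apiVersion"])
	}
}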
	I0819 19:58:44.462324  481009 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0819 19:58:44.466740  481009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:58:44.643155  481009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:58:44.662762  481009 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149 for IP: 192.168.39.238
	I0819 19:58:44.662789  481009 certs.go:194] generating shared ca certs ...
	I0819 19:58:44.662810  481009 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:44.663001  481009 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:58:44.663057  481009 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:58:44.663067  481009 certs.go:256] generating profile certs ...
	I0819 19:58:44.663167  481009 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/client.key
	I0819 19:58:44.663195  481009 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.key.2307c246
	I0819 19:58:44.663211  481009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.crt.2307c246 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238]
	I0819 19:58:45.024832  481009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.crt.2307c246 ...
	I0819 19:58:45.024866  481009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.crt.2307c246: {Name:mkbd45db25145fbea141d679e4e3b5e94a91e521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:45.025040  481009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.key.2307c246 ...
	I0819 19:58:45.025052  481009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.key.2307c246: {Name:mk6380366927801ea722cd807662f92ced7d318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:45.025120  481009 certs.go:381] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.crt.2307c246 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.crt
	I0819 19:58:45.025326  481009 certs.go:385] copying /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.key.2307c246 -> /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.key
	I0819 19:58:45.025510  481009 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/proxy-client.key
	I0819 19:58:45.025692  481009 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:58:45.025729  481009 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:58:45.025744  481009 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:58:45.025770  481009 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:58:45.025792  481009 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:58:45.025816  481009 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:58:45.025886  481009 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:58:45.027090  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:58:45.061434  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:58:45.094381  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:58:45.133397  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:58:45.173530  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 19:58:45.206716  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:58:45.241999  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:58:45.274886  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:58:45.308813  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:58:45.354057  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:58:45.381606  481009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:58:45.410472  481009 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:58:45.428456  481009 ssh_runner.go:195] Run: openssl version
	I0819 19:58:45.435083  481009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:58:45.445993  481009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.451574  481009 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.451671  481009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.458486  481009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:58:45.470536  481009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:58:45.482076  481009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.487234  481009 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.487314  481009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.493755  481009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
	I0819 19:58:45.505043  481009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:58:45.517675  481009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.524343  481009 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.524415  481009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.531883  481009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:58:45.542831  481009 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:58:45.548838  481009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:58:45.556398  481009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:58:45.562545  481009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:58:45.568814  481009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:58:45.574945  481009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:58:45.581335  481009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
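The "openssl x509 -checkend 86400" calls above verify that each control-plane certificate will still be valid 24 hours from now. The same check written against crypto/x509, using one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it should be regenerated")
	}
}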
	I0819 19:58:45.587672  481009 kubeadm.go:392] StartCluster: {Name:running-upgrade-814149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running
-upgrade-814149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 19:58:45.587768  481009 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:58:45.587824  481009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:58:45.619870  481009 cri.go:89] found id: "13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d"
	I0819 19:58:45.619895  481009 cri.go:89] found id: "8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3"
	I0819 19:58:45.619900  481009 cri.go:89] found id: "7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5"
	I0819 19:58:45.619904  481009 cri.go:89] found id: "8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e"
	I0819 19:58:45.619908  481009 cri.go:89] found id: "a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7"
	I0819 19:58:45.619913  481009 cri.go:89] found id: "2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5"
	I0819 19:58:45.619917  481009 cri.go:89] found id: "686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973"
	I0819 19:58:45.619921  481009 cri.go:89] found id: "5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7"
	I0819 19:58:45.619926  481009 cri.go:89] found id: ""
	I0819 19:58:45.619983  481009 ssh_runner.go:195] Run: sudo runc list -f json
	I0819 19:58:45.651618  481009 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d","pid":2128,"status":"running","bundle":"/run/containers/storage/overlay-containers/13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d/userdata","rootfs":"/var/lib/containers/storage/overlay/747de1858df7669c0df8b6ae7b583907346ff010ee1c242ca0865f7c60e2bbfd/merged","created":"2024-08-19T19:58:32.340621772Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b2097f03","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b2097f03\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.te
rminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:58:32.036995856Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d225c6b3-f05c-4157-94f0-d78926d01235\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_d225c6b3-f05c-4157-94f0-d78926d01235/storage-provisioner/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provi
sioner\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/747de1858df7669c0df8b6ae7b583907346ff010ee1c242ca0865f7c60e2bbfd/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_d225c6b3-f05c-4157-94f0-d78926d01235_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_d225c6b3-f05c-4157-94f0-d78926d01235_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d225c6b3-f05c-4157-94f0
-d78926d01235/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d225c6b3-f05c-4157-94f0-d78926d01235/containers/storage-provisioner/032ea17c\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d225c6b3-f05c-4157-94f0-d78926d01235/volumes/kubernetes.io~projected/kube-api-access-kgf5m\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d225c6b3-f05c-4157-94f0-d78926d01235","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\"
:\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2024-08-19T19:58:28.728194527Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5","pid":1088,"status":"running","bundle":"/run/containers/storage/overlay-containers/2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5/userdata","rootfs":"/var/lib/containers/storage/overlay/c8bad195526a65f3f1628f4feb928e9c8c35efd0afd12ad25394
32d3e2c944e9/merged","created":"2024-08-19T19:57:57.082911826Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c2b4c8cb","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c2b4c8cb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:57:57.010971681Z","io.kubernetes.cri-o.Image":"e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693","io.kubernetes.cri-o.Im
ageName":"k8s.gcr.io/kube-apiserver:v1.24.1","io.kubernetes.cri-o.ImageRef":"e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-814149\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"68cd9fb77d32dc11dd8265589f1f254e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-814149_68cd9fb77d32dc11dd8265589f1f254e/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c8bad195526a65f3f1628f4feb928e9c8c35efd0afd12ad2539432d3e2c944e9/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-running-upgrade-814149_kube-system_68cd9fb77d32dc11dd8265589f1f254e_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1
528e23487704b0efe1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-running-upgrade-814149_kube-system_68cd9fb77d32dc11dd8265589f1f254e_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/68cd9fb77d32dc11dd8265589f1f254e/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/68cd9fb77d32dc11dd8265589f1f254e/containers/kube-apiserver/39a60df5\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/c
erts\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-running-upgrade-814149","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"68cd9fb77d32dc11dd8265589f1f254e","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.238:8443","kubernetes.io/config.hash":"68cd9fb77d32dc11dd8265589f1f254e","kubernetes.io/config.seen":"2024-08-19T19:57:43.433635108Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7","pid":948,"status":"running","bundle":"/run/containers/storage/overlay-containers/5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7/userdata","roo
tfs":"/var/lib/containers/storage/overlay/62883e2625ad60465677e8171169a24766c8b59407f63953ee2eeb6eef68d2b2/merged","created":"2024-08-19T19:57:45.722805472Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"eff52b7d","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"eff52b7d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:57:45.684026186Z","io.kubernetes.cri-o.Ima
ge":"18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.24.1","io.kubernetes.cri-o.ImageRef":"18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-814149\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"61e3b0a6e8f83345f590745946a230a3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-814149_61e3b0a6e8f83345f590745946a230a3/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/62883e2625ad60465677e8171169a24766c8b59407f63953ee2eeb6eef68d2b2/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-running-upgrade-814149_kube-system_61e3b0a6e8f83345f590745946a230a3_0","io.kubernetes.cri-o.ResolvPath":
"/var/run/containers/storage/overlay-containers/9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-running-upgrade-814149_kube-system_61e3b0a6e8f83345f590745946a230a3_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/61e3b0a6e8f83345f590745946a230a3/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/61e3b0a6e8f83345f590745946a230a3/containers/kube-scheduler/0c94193c\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"ku
be-scheduler-running-upgrade-814149","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"61e3b0a6e8f83345f590745946a230a3","kubernetes.io/config.hash":"61e3b0a6e8f83345f590745946a230a3","kubernetes.io/config.seen":"2024-08-19T19:57:43.433638577Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b","pid":1458,"status":"running","bundle":"/run/containers/storage/overlay-containers/5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b/userdata","rootfs":"/var/lib/containers/storage/overlay/62f5158087b4eeb2524b2552be8890bb391b9912f809a50e87ce6d7cca127ec9/merged","created":"2024-08-19T19:58:29.243218254Z","annotations":{"addonmanage
r.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":
\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2024-08-19T19:58:28.728194527Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-podd225c6b3_f05c_4157_94f0_d78926d01235.slice","io.kubernetes.cri-o.ContainerID":"5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_d225c6b3-f05c-4157-94f0-d78926d01235_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-19T19:58:29.110455571Z","io.kubernetes.cri-o.HostName":"running-upgrade-814149","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"storage-provisioner\
",\"integration-test\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.pod.uid\":\"d225c6b3-f05c-4157-94f0-d78926d01235\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_d225c6b3-f05c-4157-94f0-d78926d01235/5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"storage-provisioner\",\"UID\":\"d225c6b3-f05c-4157-94f0-d78926d01235\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/62f5158087b4eeb2524b2552be8890bb391b9912f809a50e87ce6d7cca127ec9/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_d225c6b3-f05c-4157-94f0-d78926d01235_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.Pri
vilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"d225c6b3-f05c-4157-94f0-d78926d01235","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{
\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2024-08-19T19:58:28.728194527Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973","pid":1021,"status":"running","bundle":"/run/containers/storage/overlay-containers/686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973/userdata","rootfs":"/var/lib/containers/storage/overlay/af264d40d4e180612dd1ecc6ca5f3cd642f97e74f29266df9a35739a8de63220/merged","created":"2024-08-19T19:57:56.072440943Z","annotations":{"io.container.manager":"cri-
o","io.kubernetes.container.hash":"1c682979","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1c682979\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:57:56.006160972Z","io.kubernetes.cri-o.Image":"b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.24.1","io.kubernetes.cri-o.ImageRef":"b4ea7e648530
d171b38f67305e22caf49f9d968d71c558e663709b805076538d","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-814149\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cd3a71b3f87971114ebb42fa0c1c70bb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-running-upgrade-814149_cd3a71b3f87971114ebb42fa0c1c70bb/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/af264d40d4e180612dd1ecc6ca5f3cd642f97e74f29266df9a35739a8de63220/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-running-upgrade-814149_kube-system_cd3a71b3f87971114ebb42fa0c1c70bb_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a/userdat
a/resolv.conf","io.kubernetes.cri-o.SandboxID":"c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-running-upgrade-814149_kube-system_cd3a71b3f87971114ebb42fa0c1c70bb_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cd3a71b3f87971114ebb42fa0c1c70bb/containers/kube-controller-manager/6eb1d355\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cd3a71b3f87971114ebb42fa0c1c70bb/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true}
,{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-814149","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cd3a71b3f87971114ebb42fa0c1c70bb","kubernetes.io/config.hash":"cd3a71b3f87971114ebb42fa0c1c70bb","kubernetes.io/config.seen":"2024-08-19T19:57:43.433637214Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7862e0a
bf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5","pid":1601,"status":"running","bundle":"/run/containers/storage/overlay-containers/7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5/userdata","rootfs":"/var/lib/containers/storage/overlay/f1e271646e3f1d98222133cdf619de06257099b07b9f8aaf9cb13fb7d063c00a/merged","created":"2024-08-19T19:58:30.121593914Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d36c3c1c","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d36c3c1c\",\"io.kubernetes.container.ports\
":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:58:30.030231211Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri-o.ImageRef":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","io.kuberne
tes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-6d4b75cb6d-n6bjb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2c398c01-e3a8-4962-905b-8e22c52a6f6d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6d4b75cb6d-n6bjb_2c398c01-e3a8-4962-905b-8e22c52a6f6d/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f1e271646e3f1d98222133cdf619de06257099b07b9f8aaf9cb13fb7d063c00a/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-6d4b75cb6d-n6bjb_kube-system_2c398c01-e3a8-4962-905b-8e22c52a6f6d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6","io.kubernetes.cri-o.SandboxName":"k8s_coredns-6d4b75cb6d-n6bjb_kube-
system_2c398c01-e3a8-4962-905b-8e22c52a6f6d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/2c398c01-e3a8-4962-905b-8e22c52a6f6d/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2c398c01-e3a8-4962-905b-8e22c52a6f6d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2c398c01-e3a8-4962-905b-8e22c52a6f6d/containers/coredns/aabcb85d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/2c398c01-e3a8-4962-905b-8e22c52a6f6d/volumes/kubernetes.io~projected/kube-api-access-d8v9w\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-6d4b75cb6d-n6bjb","io.kubernetes.pod.namespa
ce":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2c398c01-e3a8-4962-905b-8e22c52a6f6d","kubernetes.io/config.seen":"2024-08-19T19:58:28.720999136Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754","pid":1131,"status":"running","bundle":"/run/containers/storage/overlay-containers/86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754/userdata","rootfs":"/var/lib/containers/storage/overlay/6df4f8f8d61978d5ef45fc33e0861fdc25c8e33441809bcbe666bd8f4b39127e/merged","created":"2024-08-19T19:57:58.636349563Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes
.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"e725c1cb4074b8cd283bfdc2d5a3bcbc\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.238:2379\",\"kubernetes.io/config.seen\":\"2024-08-19T19:57:43.433588549Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pode725c1cb4074b8cd283bfdc2d5a3bcbc.slice","io.kubernetes.cri-o.ContainerID":"86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-running-upgrade-814149_kube-system_e725c1cb4074b8cd283bfdc2d5a3bcbc_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-19T19:57:58.566293016Z","io.kubernetes.cri-o.HostName":"running-upgrade-814149","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"
etcd-running-upgrade-814149","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"e725c1cb4074b8cd283bfdc2d5a3bcbc\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-running-upgrade-814149\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-814149_e725c1cb4074b8cd283bfdc2d5a3bcbc/86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"etcd-running-upgrade-814149\",\"UID\":\"e725c1cb4074b8cd283bfdc2d5a3bcbc\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6df4f8f8d61978d5ef45fc33e0861fdc25c8e33441809bcbe666bd8f4b39127e/merged","io.kubernetes.cri-o.Name":"k8s_etcd-running-upgrade-814149_kube-system_e725c1cb4074b8cd283bfdc2d5a3bcbc_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\
":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754/userdata/shm","io.kubernetes.pod.name":"etcd-running-upgrade-814149","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e725c1cb4074b8cd283bfdc2d5a3bcbc","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.238:2379","kubernetes.io/config.hash":"e725c1cb4074b8cd283bfdc2d5a3bcbc","kubernetes.io/config.seen":"2024-08-19T19:57:43.433588549Z","kubernetes.io/config.source":"file
","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e","pid":1531,"status":"running","bundle":"/run/containers/storage/overlay-containers/8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e/userdata","rootfs":"/var/lib/containers/storage/overlay/d65bb6f8a205a0256557aeb49d3972f1646be7a404ef0d98ad3ba636c0cd6e9d/merged","created":"2024-08-19T19:58:29.906538609Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"84df7c1c","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"84df7c1c\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-lo
g\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:58:29.647397936Z","io.kubernetes.cri-o.Image":"beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.24.1","io.kubernetes.cri-o.ImageRef":"beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-zlldb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"72574efa-cee8-4763-bf3d-424af3ae1c6c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-zlldb_72574efa-cee8-4763-bf3d-424af3ae1c6c/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io
.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d65bb6f8a205a0256557aeb49d3972f1646be7a404ef0d98ad3ba636c0cd6e9d/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-zlldb_kube-system_72574efa-cee8-4763-bf3d-424af3ae1c6c_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-zlldb_kube-system_72574efa-cee8-4763-bf3d-424af3ae1c6c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc
/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72574efa-cee8-4763-bf3d-424af3ae1c6c/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72574efa-cee8-4763-bf3d-424af3ae1c6c/containers/kube-proxy/ecc4073c\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/72574efa-cee8-4763-bf3d-424af3ae1c6c/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/72574efa-cee8-4763-bf3d-424af3ae1c6c/volumes/kubernetes.io~projected/kube-api-access-j5fq9\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-zlldb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72574efa-cee8-4763-bf3d-424af3ae1c6c","kubernetes.io/config.seen":"2024-08-19T19:58:28.317453402Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['cr
io.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3/userdata","rootfs":"/var/lib/containers/storage/overlay/c91e928118b193c0427669ec7fb0293846d75f165011a0da0f401fc1c20ecd77/merged","created":"2024-08-19T19:58:30.31801087Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b2097f03","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b2097f03\",\"io.kubernetes.co
ntainer.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:58:30.151267878Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d225c6b3-f05c-4157-94f0-d78926d01235\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-
provisioner_d225c6b3-f05c-4157-94f0-d78926d01235/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c91e928118b193c0427669ec7fb0293846d75f165011a0da0f401fc1c20ecd77/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_d225c6b3-f05c-4157-94f0-d78926d01235_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_d225c6b3-f05c-4157-94f0-d78926d01235_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp
\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d225c6b3-f05c-4157-94f0-d78926d01235/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d225c6b3-f05c-4157-94f0-d78926d01235/containers/storage-provisioner/781d401c\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d225c6b3-f05c-4157-94f0-d78926d01235/volumes/kubernetes.io~projected/kube-api-access-kgf5m\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d225c6b3-f05c-4157-94f0-d78926d01235","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-p
rovisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2024-08-19T19:58:28.728194527Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7","pid":920,"status":"running","bundle":"/run/containers/storage/overlay-containers/9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0
fc68408f7/userdata","rootfs":"/var/lib/containers/storage/overlay/3142b172446dbd5d3e8e287c1f0f94f2f35c301f610c8eaa26352101226f77a7/merged","created":"2024-08-19T19:57:45.427297119Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"61e3b0a6e8f83345f590745946a230a3\",\"kubernetes.io/config.seen\":\"2024-08-19T19:57:43.433638577Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod61e3b0a6e8f83345f590745946a230a3.slice","io.kubernetes.cri-o.ContainerID":"9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-running-upgrade-814149_kube-system_61e3b0a6e8f83345f590745946a230a3_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-19T19:57:45.377086179Z","io.kubernetes.cri-o.HostName":"running-upgrade-814149","io.kubernetes.cri-o.Hos
tNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-running-upgrade-814149","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-814149\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"61e3b0a6e8f83345f590745946a230a3\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-814149_61e3b0a6e8f83345f590745946a230a3/9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-scheduler-running-upgrade-814149\",\"UID\":\"61e3b0a6e8f83345f590745946a230a3\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint
":"/var/lib/containers/storage/overlay/3142b172446dbd5d3e8e287c1f0f94f2f35c301f610c8eaa26352101226f77a7/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-running-upgrade-814149_kube-system_61e3b0a6e8f83345f590745946a230a3_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-running-upgrade-814149","io.ku
bernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"61e3b0a6e8f83345f590745946a230a3","kubernetes.io/config.hash":"61e3b0a6e8f83345f590745946a230a3","kubernetes.io/config.seen":"2024-08-19T19:57:43.433638577Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7","pid":1162,"status":"running","bundle":"/run/containers/storage/overlay-containers/a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7/userdata","rootfs":"/var/lib/containers/storage/overlay/4168dbdf4384c147d25af8a80b3ec191a1a89b3842fc58ad4730009178a75c5c/merged","created":"2024-08-19T19:57:58.950053047Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"841356c0","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kuber
netes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"841356c0\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-19T19:57:58.89698761Z","io.kubernetes.cri-o.Image":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri-o.ImageRef":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-running-upgrade-814149\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kuberne
tes.pod.uid\":\"e725c1cb4074b8cd283bfdc2d5a3bcbc\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-814149_e725c1cb4074b8cd283bfdc2d5a3bcbc/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4168dbdf4384c147d25af8a80b3ec191a1a89b3842fc58ad4730009178a75c5c/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-running-upgrade-814149_kube-system_e725c1cb4074b8cd283bfdc2d5a3bcbc_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754","io.kubernetes.cri-o.SandboxName":"k8s_etcd-running-upgrade-814149_kube-system_e725c1cb4074b8cd283bfdc2d5a3bcbc_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.
cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e725c1cb4074b8cd283bfdc2d5a3bcbc/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e725c1cb4074b8cd283bfdc2d5a3bcbc/containers/etcd/ee4a419d\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-running-upgrade-814149","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e725c1cb4074b8cd283bfdc2d5a3bcbc","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.238:2379","kubernetes.io/config.hash":"e725c1cb4074b8cd283bfdc2d5a3bcbc","kubernetes.io/config.seen":"2024-08-19T19:57:43.433588549Z","kubernetes.io/config.source":"
file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6","pid":1485,"status":"running","bundle":"/run/containers/storage/overlay-containers/bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6/userdata","rootfs":"/var/lib/containers/storage/overlay/d21c7c22745a2ae18b471a67bb91e1406bacbdb95449f38a45bd1675b0f46f96/merged","created":"2024-08-19T19:58:29.360348145Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-08-19T19:58:28.720999136Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"72:d2:ef:75:fd:5c\"},{\"name\":\"veth2
8df23a7\",\"mac\":\"92:c2:c0:ab:5e:0d\"},{\"name\":\"eth0\",\"mac\":\"da:d2:9d:44:f4:36\",\"sandbox\":\"/var/run/netns/b91d5625-7a8d-4638-946c-cc93cb9fe609\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod2c398c01_e3a8_4962_905b_8e22c52a6f6d.slice","io.kubernetes.cri-o.ContainerID":"bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-6d4b75cb6d-n6bjb_kube-system_2c398c01-e3a8-4962-905b-8e22c52a6f6d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-19T19:58:29.080224201Z","io.kubernetes.cri-o.HostName":"coredns-6d4b75cb6d-n6bjb","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6/userdat
a/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-6d4b75cb6d-n6bjb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"2c398c01-e3a8-4962-905b-8e22c52a6f6d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-6d4b75cb6d-n6bjb\",\"k8s-app\":\"kube-dns\",\"pod-template-hash\":\"6d4b75cb6d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6d4b75cb6d-n6bjb_2c398c01-e3a8-4962-905b-8e22c52a6f6d/bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"coredns-6d4b75cb6d-n6bjb\",\"UID\":\"2c398c01-e3a8-4962-905b-8e22c52a6f6d\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d21c7c22745a2ae18b471a67bb91e1406bacbdb95449f38a45bd1675b0f46f96/merged","io.kubernetes.cri-o.Name":"k8s_coredns-6d4b75cb6d-n6bjb_kube-system_2c398c01-e3a8-4962-905b-8e22c
52a6f6d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6/userdata/shm","io.kubernetes.pod.name":"coredns-6d4b75cb6d-n6bjb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"2c398c01-e3a8-4962-905b-8e22c52a6f6d","k8s-app":"kube-dns","kubernetes.io/config.seen":"2024-08-19T19:58:28.720999136Z","kubernetes.io/config.source":"api","org.systemd.property.Coll
ectMode":"'inactive-or-failed'","pod-template-hash":"6d4b75cb6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a","pid":991,"status":"running","bundle":"/run/containers/storage/overlay-containers/c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a/userdata","rootfs":"/var/lib/containers/storage/overlay/61fe3381ef9f42f48a038466115de13d8a1a1370e99f2c5ffe5515b252647fab/merged","created":"2024-08-19T19:57:55.63514221Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-08-19T19:57:43.433637214Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"cd3a71b3f87971114ebb42fa0c1c70bb\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podcd3a71b3f87971114ebb42fa0c1c70bb.slice","io.kubernetes.cri-o.ContainerID":"c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f3
76a","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-running-upgrade-814149_kube-system_cd3a71b3f87971114ebb42fa0c1c70bb_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-19T19:57:55.573512869Z","io.kubernetes.cri-o.HostName":"running-upgrade-814149","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-running-upgrade-814149","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"cd3a71b3f87971114ebb42fa0c1c70bb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-814149\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pod
s/kube-system_kube-controller-manager-running-upgrade-814149_cd3a71b3f87971114ebb42fa0c1c70bb/c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-controller-manager-running-upgrade-814149\",\"UID\":\"cd3a71b3f87971114ebb42fa0c1c70bb\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/61fe3381ef9f42f48a038466115de13d8a1a1370e99f2c5ffe5515b252647fab/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-running-upgrade-814149_kube-system_cd3a71b3f87971114ebb42fa0c1c70bb_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"
","io.kubernetes.cri-o.SandboxID":"c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-814149","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cd3a71b3f87971114ebb42fa0c1c70bb","kubernetes.io/config.hash":"cd3a71b3f87971114ebb42fa0c1c70bb","kubernetes.io/config.seen":"2024-08-19T19:57:43.433637214Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88","pid":1409,"status":"running","bundle":"/run/containers/storage/overlay-containers/c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88/userdata","rootfs":"/var/
lib/containers/storage/overlay/8f58b2890809c3536175082342a637ddf60d90ca3602135a55a7edbc2909eddf/merged","created":"2024-08-19T19:58:29.027292854Z","annotations":{"controller-revision-hash":"58bf5dfbd7","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2024-08-19T19:58:28.317453402Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod72574efa_cee8_4763_bf3d_424af3ae1c6c.slice","io.kubernetes.cri-o.ContainerID":"c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-zlldb_kube-system_72574efa-cee8-4763-bf3d-424af3ae1c6c_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-19T19:58:28.956451829Z","io.kubernetes.cri-o.HostName":"running-upgrade-814149","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/c683
a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-zlldb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-zlldb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.container.name\":\"POD\",\"controller-revision-hash\":\"58bf5dfbd7\",\"io.kubernetes.pod.uid\":\"72574efa-cee8-4763-bf3d-424af3ae1c6c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-zlldb_72574efa-cee8-4763-bf3d-424af3ae1c6c/c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-proxy-zlldb\",\"UID\":\"72574efa-cee8-4763-bf3d-424af3ae1c6c\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8f58b2890809c3536175082342a637ddf60d90ca3602135a55a7edbc2909eddf/merged","io.kubernetes.cri-
o.Name":"k8s_kube-proxy-zlldb_kube-system_72574efa-cee8-4763-bf3d-424af3ae1c6c_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88/userdata/shm","io.kubernetes.pod.name":"kube-proxy-zlldb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"72574efa-cee8-4763-bf3d-424af3ae1c6c","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2024-08-19T19:58
:28.317453402Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1","pid":1041,"status":"running","bundle":"/run/containers/storage/overlay-containers/f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1/userdata","rootfs":"/var/lib/containers/storage/overlay/9928428c67fa58437780935cbf8307a9da0bccb6412a49de14185d6f27f2d036/merged","created":"2024-08-19T19:57:56.694886375Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-08-19T19:57:43.433635108Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"68cd9fb77d32dc11dd8265589f1f254e\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.238:8443\"}","io.kubernetes.cri-o.CgroupParent"
:"kubepods-burstable-pod68cd9fb77d32dc11dd8265589f1f254e.slice","io.kubernetes.cri-o.ContainerID":"f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-running-upgrade-814149_kube-system_68cd9fb77d32dc11dd8265589f1f254e_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-19T19:57:56.565866924Z","io.kubernetes.cri-o.HostName":"running-upgrade-814149","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-running-upgrade-814149","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"68cd9fb77d32dc11dd8265589f1f254e\",\"io.kubernetes.pod.namespace\":\"kube
-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-814149\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-814149_68cd9fb77d32dc11dd8265589f1f254e/f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-apiserver-running-upgrade-814149\",\"UID\":\"68cd9fb77d32dc11dd8265589f1f254e\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9928428c67fa58437780935cbf8307a9da0bccb6412a49de14185d6f27f2d036/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-running-upgrade-814149_kube-system_68cd9fb77d32dc11dd8265589f1f254e_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f67f0a71460dceede0ce2edf05f8
e5869e8f39d74624b1528e23487704b0efe1/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-running-upgrade-814149","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"68cd9fb77d32dc11dd8265589f1f254e","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.238:8443","kubernetes.io/config.hash":"68cd9fb77d32dc11dd8265589f1f254e","kubernetes.io/config.seen":"2024-08-19T19:57:43.433635108Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0819 19:58:45.652323  481009 cri.go:126] list returned 15 containers
	I0819 19:58:45.652344  481009 cri.go:129] container: {ID:13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d Status:running}
	I0819 19:58:45.652364  481009 cri.go:135] skipping {13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d running}: state = "running", want "paused"
	I0819 19:58:45.652377  481009 cri.go:129] container: {ID:2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5 Status:running}
	I0819 19:58:45.652388  481009 cri.go:135] skipping {2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5 running}: state = "running", want "paused"
	I0819 19:58:45.652397  481009 cri.go:129] container: {ID:5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7 Status:running}
	I0819 19:58:45.652406  481009 cri.go:135] skipping {5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7 running}: state = "running", want "paused"
	I0819 19:58:45.652414  481009 cri.go:129] container: {ID:5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b Status:running}
	I0819 19:58:45.652423  481009 cri.go:131] skipping 5de95de84546564fd451ad3ff61b80e4becd73f0dac9c617103f242bdf28d23b - not in ps
	I0819 19:58:45.652430  481009 cri.go:129] container: {ID:686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973 Status:running}
	I0819 19:58:45.652442  481009 cri.go:135] skipping {686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973 running}: state = "running", want "paused"
	I0819 19:58:45.652451  481009 cri.go:129] container: {ID:7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5 Status:running}
	I0819 19:58:45.652460  481009 cri.go:135] skipping {7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5 running}: state = "running", want "paused"
	I0819 19:58:45.652469  481009 cri.go:129] container: {ID:86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754 Status:running}
	I0819 19:58:45.652475  481009 cri.go:131] skipping 86cf01f7c0794e84b3d4d9922cef498bbec499bdff1998ab12186d006f552754 - not in ps
	I0819 19:58:45.652481  481009 cri.go:129] container: {ID:8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e Status:running}
	I0819 19:58:45.652490  481009 cri.go:135] skipping {8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e running}: state = "running", want "paused"
	I0819 19:58:45.652500  481009 cri.go:129] container: {ID:8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3 Status:stopped}
	I0819 19:58:45.652512  481009 cri.go:135] skipping {8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3 stopped}: state = "stopped", want "paused"
	I0819 19:58:45.652518  481009 cri.go:129] container: {ID:9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7 Status:running}
	I0819 19:58:45.652526  481009 cri.go:131] skipping 9c80f4e98f44038130a5f0e5a8ef7d87f3e8905c26621658bce28e0fc68408f7 - not in ps
	I0819 19:58:45.652534  481009 cri.go:129] container: {ID:a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7 Status:running}
	I0819 19:58:45.652542  481009 cri.go:135] skipping {a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7 running}: state = "running", want "paused"
	I0819 19:58:45.652551  481009 cri.go:129] container: {ID:bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6 Status:running}
	I0819 19:58:45.652559  481009 cri.go:131] skipping bb321937af4aaa3fed1e1ae9026ef228683b21416c623ba83d1f2bc59e5c45c6 - not in ps
	I0819 19:58:45.652566  481009 cri.go:129] container: {ID:c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a Status:running}
	I0819 19:58:45.652571  481009 cri.go:131] skipping c106afab06c85568dfa956e712870ce49c9b34bdca085586d854d9af903f376a - not in ps
	I0819 19:58:45.652590  481009 cri.go:129] container: {ID:c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88 Status:running}
	I0819 19:58:45.652601  481009 cri.go:131] skipping c683a65a5a47e7c921fecb93046a2b33cc6042846eae2093338140d7ad89bf88 - not in ps
	I0819 19:58:45.652607  481009 cri.go:129] container: {ID:f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1 Status:running}
	I0819 19:58:45.652615  481009 cri.go:131] skipping f67f0a71460dceede0ce2edf05f8e5869e8f39d74624b1528e23487704b0efe1 - not in ps
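The listing above is the pre-check for the pause/unpause path: each kube-system container is kept only if its CRI state matches the state the operation wants ("paused" here), and IDs that no longer show up in `crictl ps` are dropped. A rough Go sketch of that filter, with made-up types and IDs rather than minikube's actual cri package:

package main

import "fmt"

// container mirrors the {ID Status} pairs printed by cri.go above.
type container struct {
	ID     string
	Status string
}

// filterByState keeps only containers already in the wanted state, logging a
// "skipping" line for everything else, much like the cri.go:129/135 output above.
func filterByState(all []container, want string) []container {
	var keep []container
	for _, c := range all {
		if c.Status != want {
			fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.Status, want)
			continue
		}
		keep = append(keep, c)
	}
	return keep
}

func main() {
	cs := []container{{ID: "c1", Status: "running"}, {ID: "c2", Status: "stopped"}}
	fmt.Println(filterByState(cs, "paused")) // empty: nothing is paused, so there is nothing to act on
}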
	I0819 19:58:45.652672  481009 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0819 19:58:45.662436  481009 kubeadm.go:405] apiserver tunnel failed: apiserver port not set
	I0819 19:58:45.662468  481009 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:58:45.662475  481009 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:58:45.662532  481009 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:58:45.671153  481009 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:58:45.671805  481009 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-814149" does not appear in /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:58:45.672136  481009 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-430949/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-814149" cluster setting kubeconfig missing "running-upgrade-814149" context setting]
	I0819 19:58:45.672702  481009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:45.673609  481009 kapi.go:59] client config for running-upgrade-814149: &rest.Config{Host:"https://192.168.39.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/client.key", CAFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 19:58:45.674294  481009 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:58:45.683735  481009 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "running-upgrade-814149"
	   kubeletExtraArgs:
	     node-ip: 192.168.39.238
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
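The diff above is the drift check itself: a fresh kubeadm.yaml.new is rendered, compared against the kubeadm.yaml already on the node with `sudo diff -u`, and a non-empty diff (exit status 1) triggers a reconfigure. A rough local approximation of that pattern, as a sketch; the real run goes through ssh_runner on the guest and the function name here is illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeadmConfigDrifted treats diff exit code 0 as "identical", 1 as "drifted",
// and anything else as a real error, matching the behavior logged above.
func kubeadmConfigDrifted(current, proposed string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", current, proposed).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ: reconfigure the cluster from the new file
	}
	return false, "", err
}

func main() {
	drifted, diff, err := kubeadmConfigDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
	if drifted {
		fmt.Print(diff)
	}
}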
	I0819 19:58:45.683772  481009 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:58:45.683790  481009 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:58:45.683852  481009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:58:45.712316  481009 cri.go:89] found id: "13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d"
	I0819 19:58:45.712361  481009 cri.go:89] found id: "8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3"
	I0819 19:58:45.712369  481009 cri.go:89] found id: "7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5"
	I0819 19:58:45.712374  481009 cri.go:89] found id: "8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e"
	I0819 19:58:45.712378  481009 cri.go:89] found id: "a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7"
	I0819 19:58:45.712383  481009 cri.go:89] found id: "2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5"
	I0819 19:58:45.712387  481009 cri.go:89] found id: "686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973"
	I0819 19:58:45.712392  481009 cri.go:89] found id: "5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7"
	I0819 19:58:45.712395  481009 cri.go:89] found id: ""
	I0819 19:58:45.712403  481009 cri.go:252] Stopping containers: [13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d 8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3 7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5 8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7 2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5 686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973 5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7]
	I0819 19:58:45.712478  481009 ssh_runner.go:195] Run: which crictl
	I0819 19:58:45.716712  481009 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d 8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3 7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5 8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7 2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5 686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973 5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7
	W0819 19:58:45.803175  481009 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d 8f9c3ed23c3cbe0378e1f4f9d7a29a19126a80d5d193caa0b420f30c3936d8b3 7862e0abf5deb68efd8438f6dff9a4ff3d3c74800170ad729076fdf2880030d5 8854d3356e9993ba913871e6fcc651fba6093a1867342f5151dce7d1d60fdd8e a3805f497436ab482597214a40674938902aa81443e0c2468cdd57975e85a7f7 2724e9f71a84965c3cd2efaff59994a8b66142eab6faa3e162c1a4539f2d1ff5 686f3b5b750769ead0b15f5c72cf6961b5783bf2fe76d01b050b024320451973 5a984ee632d6719a4db6df67d528abaec4c00c7ae6e47930c912a4e863e2c3e7: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-19T19:58:45Z" level=fatal msg="stopping the container \"13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d\": rpc error: code = Unknown desc = failed to unmount container 13cacb87941ebf0b295762c33399c8e4b23f18f41007364a15e9002d94341d2d: layer not known"
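What the warning above amounts to: `crictl stop` is attempted on every kube-system container, and when that fails (here because one container's storage layer is already gone), the failure is downgraded to a warning and the code falls back to stopping the kubelet before regenerating the control plane. A hedged sketch of that shape, with illustrative helper names:

package main

import (
	"fmt"
	"os/exec"
)

// stopKubeSystemContainers runs the same crictl invocation seen in the log.
func stopKubeSystemContainers(ids []string) error {
	args := append([]string{"/usr/bin/crictl", "stop", "--timeout=10"}, ids...)
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("crictl stop: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := stopKubeSystemContainers([]string{"c1", "c2"}); err != nil {
		// Non-fatal: warn about possible port conflicts and stop the kubelet instead.
		fmt.Println("warning, falling back to stopping kubelet:", err)
		_ = exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
	}
}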
	I0819 19:58:45.803253  481009 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:58:45.839075  481009 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:58:45.849357  481009 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Aug 19 19:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Aug 19 19:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 19 19:58 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Aug 19 19:57 /etc/kubernetes/scheduler.conf
	
	I0819 19:58:45.849450  481009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I0819 19:58:45.858228  481009 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:58:45.858314  481009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:58:45.868251  481009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I0819 19:58:45.876567  481009 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:58:45.876650  481009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:58:45.885208  481009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I0819 19:58:45.893515  481009 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:58:45.893592  481009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:58:45.902495  481009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I0819 19:58:45.913302  481009 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:58:45.913382  481009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
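The grep/rm pairs above implement a simple rule: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is deleted so that `kubeadm init phase kubeconfig all` (run a few lines below) regenerates it. Because the endpoint being grepped for in this run carries port 0, every grep misses and all four files are removed. A rough Go equivalent, for illustration only:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The log above greps for port 0, which is why every file was removed; a healthy
	// run would use the real apiserver port here.
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%q not found in %s, removing so kubeadm can regenerate it\n", endpoint, path)
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}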
	I0819 19:58:45.934567  481009 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:58:45.942966  481009 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:58:46.058557  481009 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:58:46.885279  481009 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:58:47.279146  481009 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:58:47.362409  481009 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:58:47.483656  481009 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:58:47.483755  481009 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:58:47.522541  481009 api_server.go:72] duration metric: took 38.898531ms to wait for apiserver process to appear ...
	I0819 19:58:47.522633  481009 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:58:47.522669  481009 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0819 19:58:47.530893  481009 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0819 19:58:47.539720  481009 api_server.go:141] control plane version: v1.24.1
	I0819 19:58:47.539757  481009 api_server.go:131] duration metric: took 17.104951ms to wait for apiserver health ...
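The healthz wait above is a plain HTTPS poll of the apiserver until it answers 200/"ok" or the deadline passes. A minimal sketch of that loop; TLS verification is skipped here for brevity, whereas the real client uses the CA and client certificate from the kapi.go config shown earlier:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.238:8443/healthz", time.Minute))
}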
	I0819 19:58:47.539770  481009 cni.go:84] Creating CNI manager for ""
	I0819 19:58:47.539780  481009 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:58:47.541993  481009 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:58:47.543470  481009 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:58:47.555204  481009 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:58:47.574873  481009 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:58:47.574978  481009 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 19:58:47.575010  481009 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 19:58:47.593525  481009 system_pods.go:59] 7 kube-system pods found
	I0819 19:58:47.593580  481009 system_pods.go:61] "coredns-6d4b75cb6d-n6bjb" [2c398c01-e3a8-4962-905b-8e22c52a6f6d] Running
	I0819 19:58:47.593590  481009 system_pods.go:61] "etcd-running-upgrade-814149" [619ee562-7fb0-4b5c-89aa-2b10d1050bd6] Running
	I0819 19:58:47.593596  481009 system_pods.go:61] "kube-apiserver-running-upgrade-814149" [16924e68-c47e-42ab-981c-9f6c64a35af6] Running
	I0819 19:58:47.593604  481009 system_pods.go:61] "kube-controller-manager-running-upgrade-814149" [c84e3c23-4505-452b-82bb-027c958dad19] Running
	I0819 19:58:47.593611  481009 system_pods.go:61] "kube-proxy-zlldb" [72574efa-cee8-4763-bf3d-424af3ae1c6c] Running
	I0819 19:58:47.593617  481009 system_pods.go:61] "kube-scheduler-running-upgrade-814149" [faeb4f9e-7ed2-465f-a755-9e820342a1c0] Running
	I0819 19:58:47.593622  481009 system_pods.go:61] "storage-provisioner" [d225c6b3-f05c-4157-94f0-d78926d01235] Running
	I0819 19:58:47.593634  481009 system_pods.go:74] duration metric: took 18.733761ms to wait for pod list to return data ...
	I0819 19:58:47.593646  481009 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:58:47.597456  481009 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0819 19:58:47.597506  481009 node_conditions.go:123] node cpu capacity is 2
	I0819 19:58:47.597524  481009 node_conditions.go:105] duration metric: took 3.868635ms to run NodePressure ...
	I0819 19:58:47.597552  481009 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:58:47.930040  481009 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:58:47.947013  481009 ops.go:34] apiserver oom_adj: -16
	I0819 19:58:47.947052  481009 kubeadm.go:597] duration metric: took 2.284569376s to restartPrimaryControlPlane
	I0819 19:58:47.947066  481009 kubeadm.go:394] duration metric: took 2.359404698s to StartCluster
	I0819 19:58:47.947121  481009 settings.go:142] acquiring lock: {Name:mkc71def6be9966e5f008a22161bf3ed26f482b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:47.947241  481009 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:58:47.948464  481009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-430949/kubeconfig: {Name:mk1ae3aa741bb2460fc0d48f80867b991b4a0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:47.948749  481009 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:58:47.948909  481009 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:58:47.948976  481009 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-814149"
	I0819 19:58:47.948999  481009 config.go:182] Loaded profile config "running-upgrade-814149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0819 19:58:47.949010  481009 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-814149"
	I0819 19:58:47.949031  481009 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-814149"
	I0819 19:58:47.949005  481009 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-814149"
	W0819 19:58:47.949054  481009 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:58:47.949079  481009 host.go:66] Checking if "running-upgrade-814149" exists ...
	I0819 19:58:47.949428  481009 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:47.949456  481009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:47.949459  481009 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:47.949474  481009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:47.951337  481009 out.go:177] * Verifying Kubernetes components...
	I0819 19:58:47.952747  481009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:58:47.967427  481009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34079
	I0819 19:58:47.968034  481009 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:47.968635  481009 main.go:141] libmachine: Using API Version  1
	I0819 19:58:47.968661  481009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:47.969238  481009 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:47.969507  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetState
	I0819 19:58:47.970707  481009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38211
	I0819 19:58:47.971264  481009 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:47.971947  481009 main.go:141] libmachine: Using API Version  1
	I0819 19:58:47.971969  481009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:47.972397  481009 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:47.972720  481009 kapi.go:59] client config for running-upgrade-814149: &rest.Config{Host:"https://192.168.39.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/profiles/running-upgrade-814149/client.key", CAFile:"/home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 19:58:47.972970  481009 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:47.972994  481009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:47.973027  481009 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-814149"
	W0819 19:58:47.973040  481009 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:58:47.973267  481009 host.go:66] Checking if "running-upgrade-814149" exists ...
	I0819 19:58:47.973634  481009 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:47.973661  481009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:47.994215  481009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38689
	I0819 19:58:47.995060  481009 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:47.995740  481009 main.go:141] libmachine: Using API Version  1
	I0819 19:58:47.995765  481009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:47.996164  481009 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:47.996753  481009 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-430949/.minikube/bin/docker-machine-driver-kvm2
	I0819 19:58:47.996778  481009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:58:48.009333  481009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37381
	I0819 19:58:48.010583  481009 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:58:48.011303  481009 main.go:141] libmachine: Using API Version  1
	I0819 19:58:48.011332  481009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:58:48.011821  481009 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:58:48.012078  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .GetState
	I0819 19:58:48.014193  481009 main.go:141] libmachine: (running-upgrade-814149) Calling .DriverName
	I0819 19:58:48.016173  481009 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:58:43.172809  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:43.173377  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:43.173423  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:43.173340  481419 retry.go:31] will retry after 340.952687ms: waiting for machine to come up
	I0819 19:58:43.516109  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:43.516721  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:43.516740  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:43.516680  481419 retry.go:31] will retry after 431.043253ms: waiting for machine to come up
	I0819 19:58:43.949254  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:43.949817  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:43.949836  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:43.949720  481419 retry.go:31] will retry after 467.702895ms: waiting for machine to come up
	I0819 19:58:44.419528  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:44.420236  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:44.420270  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:44.420146  481419 retry.go:31] will retry after 735.974424ms: waiting for machine to come up
	I0819 19:58:45.158487  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:45.159346  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:45.159367  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:45.159237  481419 retry.go:31] will retry after 939.601782ms: waiting for machine to come up
	I0819 19:58:46.101040  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:46.101620  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:46.101645  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:46.101573  481419 retry.go:31] will retry after 988.707631ms: waiting for machine to come up
	I0819 19:58:47.092271  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | domain NoKubernetes-803941 has defined MAC address 52:54:00:40:8f:2e in network mk-NoKubernetes-803941
	I0819 19:58:47.092797  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | unable to find current IP address of domain NoKubernetes-803941 in network mk-NoKubernetes-803941
	I0819 19:58:47.092817  481365 main.go:141] libmachine: (NoKubernetes-803941) DBG | I0819 19:58:47.092733  481419 retry.go:31] will retry after 1.289747968s: waiting for machine to come up
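The retry.go lines above are libmachine polling libvirt for the new domain's IP address: each miss schedules another attempt after a somewhat larger, jittered delay. A generic sketch of that wait-for-IP loop; getIP stands in for the DHCP-lease lookup and is not minikube's actual function:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryForIP keeps calling getIP with a growing, jittered delay until it succeeds
// or the attempt budget is exhausted, echoing the "will retry after ..." lines above.
func retryForIP(getIP func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := getIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jitter, as in the varying retry intervals
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	calls := 0
	ip, err := retryForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("no lease yet")
		}
		return "192.168.72.10", nil // placeholder address for the example
	}, 10)
	fmt.Println(ip, err)
}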
	I0819 19:58:43.466333  481208 cni.go:84] Creating CNI manager for ""
	I0819 19:58:43.466366  481208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:58:43.466378  481208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:58:43.466409  481208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-232147 NodeName:pause-232147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:58:43.466606  481208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-232147"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
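For reference, the same kubelet knobs the earlier drift diff touched (cgroupDriver, runtimeRequestTimeout, the unix:// socket form) live in the KubeletConfiguration document just above. A small sketch that reads them back out of such a document, assuming gopkg.in/yaml.v3 is available; this is not something the test itself does:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

const kubeletDoc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
`

// kubeletConfig captures only the fields we care about for this illustration.
type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	RuntimeRequestTimeout    string `yaml:"runtimeRequestTimeout"`
}

func main() {
	var kc kubeletConfig
	if err := yaml.Unmarshal([]byte(kubeletDoc), &kc); err != nil {
		panic(err)
	}
	fmt.Println(kc.CgroupDriver, kc.ContainerRuntimeEndpoint, kc.RuntimeRequestTimeout)
}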
	
	I0819 19:58:43.466692  481208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:58:43.480800  481208 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:58:43.480943  481208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:58:43.494098  481208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 19:58:43.518868  481208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:58:43.550730  481208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0819 19:58:43.575823  481208 ssh_runner.go:195] Run: grep 192.168.50.125	control-plane.minikube.internal$ /etc/hosts
	I0819 19:58:43.582039  481208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:58:43.767732  481208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:58:43.821156  481208 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147 for IP: 192.168.50.125
	I0819 19:58:43.821187  481208 certs.go:194] generating shared ca certs ...
	I0819 19:58:43.821211  481208 certs.go:226] acquiring lock for ca certs: {Name:mk1a299d1a77a4c0ee8c5c97373d2da3d35c8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:58:43.821396  481208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key
	I0819 19:58:43.821450  481208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key
	I0819 19:58:43.821467  481208 certs.go:256] generating profile certs ...
	I0819 19:58:43.821620  481208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/client.key
	I0819 19:58:43.821705  481208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/apiserver.key.bef1e027
	I0819 19:58:43.821761  481208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/proxy-client.key
	I0819 19:58:43.821912  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem (1338 bytes)
	W0819 19:58:43.821949  481208 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159_empty.pem, impossibly tiny 0 bytes
	I0819 19:58:43.821958  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 19:58:43.821988  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:58:43.822021  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:58:43.822045  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/certs/key.pem (1675 bytes)
	I0819 19:58:43.822096  481208 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem (1708 bytes)
	I0819 19:58:43.823008  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:58:44.086925  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:58:44.262859  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:58:44.445056  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 19:58:44.554734  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 19:58:44.659085  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:58:44.730751  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:58:44.769563  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/pause-232147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 19:58:44.811174  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/certs/438159.pem --> /usr/share/ca-certificates/438159.pem (1338 bytes)
	I0819 19:58:44.845022  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/ssl/certs/4381592.pem --> /usr/share/ca-certificates/4381592.pem (1708 bytes)
	I0819 19:58:44.888275  481208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-430949/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:58:44.931824  481208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:58:45.018160  481208 ssh_runner.go:195] Run: openssl version
	I0819 19:58:45.028573  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4381592.pem && ln -fs /usr/share/ca-certificates/4381592.pem /etc/ssl/certs/4381592.pem"
	I0819 19:58:45.044017  481208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.052240  481208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:49 /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.052330  481208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4381592.pem
	I0819 19:58:45.061553  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4381592.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:58:45.076326  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:58:45.096827  481208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.103822  481208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.104009  481208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:58:45.112205  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:58:45.124865  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/438159.pem && ln -fs /usr/share/ca-certificates/438159.pem /etc/ssl/certs/438159.pem"
	I0819 19:58:45.140513  481208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.146811  481208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:49 /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.146908  481208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/438159.pem
	I0819 19:58:45.154991  481208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/438159.pem /etc/ssl/certs/51391683.0"
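The openssl/ln sequence above follows the standard OpenSSL trust-store convention: compute the certificate's subject hash and point /etc/ssl/certs/<hash>.0 at the PEM (b5213941.0 for minikubeCA.pem in this run). A sketch of the same steps driven from Go; minikube actually issues the equivalent shell commands over SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// trustCert computes the OpenSSL subject hash for pemPath and creates the
// /etc/ssl/certs/<hash>.0 symlink that OpenSSL's certificate lookup expects.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := "/etc/ssl/certs/" + hash + ".0"
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	fmt.Println(trustCert("/usr/share/ca-certificates/minikubeCA.pem"))
}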
	I0819 19:58:45.174607  481208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:58:45.180731  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:58:45.188748  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:58:45.196496  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:58:45.204894  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:58:45.216659  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:58:45.225302  481208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
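Each `-checkend 86400` call above asks openssl whether the certificate stays valid for at least another 24 hours. The same check in plain Go, as a sketch against one of the paths above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within duration d,
// mirroring `openssl x509 -checkend` (which fails when the cert will have expired).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
}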
	I0819 19:58:45.237522  481208 kubeadm.go:392] StartCluster: {Name:pause-232147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:pause-232147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:58:45.237752  481208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:58:45.237831  481208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:58:45.312153  481208 cri.go:89] found id: "ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f"
	I0819 19:58:45.312251  481208 cri.go:89] found id: "97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606"
	I0819 19:58:45.312273  481208 cri.go:89] found id: "8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae"
	I0819 19:58:45.312302  481208 cri.go:89] found id: "a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3"
	I0819 19:58:45.312332  481208 cri.go:89] found id: "bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2"
	I0819 19:58:45.312347  481208 cri.go:89] found id: "87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0"
	I0819 19:58:45.312367  481208 cri.go:89] found id: "ba564d4d374b6de35552277a9f888a707e3fcc74a84da8bf6e8a43763dbe7a5c"
	I0819 19:58:45.312408  481208 cri.go:89] found id: "c362bfb09b902727dca16cc486a92f740411447ccf8a54937f1a2ce6b4861b94"
	I0819 19:58:45.312435  481208 cri.go:89] found id: ""
	I0819 19:58:45.312531  481208 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.644406671Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097551644380654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c71853ce-ace5-44a6-93bb-131761b20ed7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.645111367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e49edc1-fa71-4ec7-9e73-ca3908e73c95 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.645162605Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e49edc1-fa71-4ec7-9e73-ca3908e73c95 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.645407721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:520f27ed06d74c8d72a71cb6b710931be15d57349dc84d6addf3f17e5d45e1e4,PodSandboxId:685113e39e23351b2813c2da865a0acc2ad8a24f0ccd0d49ce497732bef321a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724097532179265307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd064504364d8ab84157d5097535d715c95c99f52b5cfca602136ec82be83677,PodSandboxId:6cc030921c7c40908c8e724ece7c6bd2c1e42ce6112ab3e4476dc3524cd3aa93,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724097532172244929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c7b8274ac3fc5960f8c620b1c947e80091b2742eb2b7755dceec1f2fdca7cb,PodSandboxId:7976e34be27ac12f9e15b0fffb9fbd20f1cab658eee55d923564382af2863505,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724097528426480711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annota
tions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94efaa503d71f37142cc657a5c12d83a68eaf26b8f8a39209f14d17e7aede30,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724097528436927820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36fe82de3b13311fd356d124ae973b46707e6e3b3adaa19a549738097deb26f,PodSandboxId:e7b297af9dc53849969bea95857c2d8b237073ba371897d2c04f0ced072cd2c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724097528472798822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernete
s.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c09ba25a9801ff169f19647ad1fbdd22aaf654ac030c1f93a4b0c9a999e453,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724097528399274745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724097524336094571,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724097524215098167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]string{io.kubernetes.
container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae,PodSandboxId:bc0526a6cc5b66c7d725af68cfc1142c2a1690930a8d29dcafc0a6c2e9bbf94c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724097522549800950,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kube
rnetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3,PodSandboxId:04773d646a4805fc307be3d4962a0d69eece9dbf938341bfb48e7dca69c8e352,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724097522440289922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0,PodSandboxId:97a2ccc19121b3e6cc856b4bfe704b0d64118bc96b8cd2ef34b1d73b8c6a5bee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724097521943106080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2,PodSandboxId:61b76d0d2f4f618dd211ef9c20819370ed398a5375a0c008b71f6312db73890e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724097521952123562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e49edc1-fa71-4ec7-9e73-ca3908e73c95 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.688659477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=052402d4-c5ae-4318-9eb4-882bf086a192 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.688750514Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=052402d4-c5ae-4318-9eb4-882bf086a192 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.690110367Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5d781de2-d815-421c-bcfc-4ddca7bbf3ba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.690750840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097551690724903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d781de2-d815-421c-bcfc-4ddca7bbf3ba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.691515203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f587bef-7429-4af2-ad02-3423df0c3e94 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.691584147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f587bef-7429-4af2-ad02-3423df0c3e94 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.691915428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:520f27ed06d74c8d72a71cb6b710931be15d57349dc84d6addf3f17e5d45e1e4,PodSandboxId:685113e39e23351b2813c2da865a0acc2ad8a24f0ccd0d49ce497732bef321a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724097532179265307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd064504364d8ab84157d5097535d715c95c99f52b5cfca602136ec82be83677,PodSandboxId:6cc030921c7c40908c8e724ece7c6bd2c1e42ce6112ab3e4476dc3524cd3aa93,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724097532172244929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c7b8274ac3fc5960f8c620b1c947e80091b2742eb2b7755dceec1f2fdca7cb,PodSandboxId:7976e34be27ac12f9e15b0fffb9fbd20f1cab658eee55d923564382af2863505,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724097528426480711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annota
tions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94efaa503d71f37142cc657a5c12d83a68eaf26b8f8a39209f14d17e7aede30,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724097528436927820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36fe82de3b13311fd356d124ae973b46707e6e3b3adaa19a549738097deb26f,PodSandboxId:e7b297af9dc53849969bea95857c2d8b237073ba371897d2c04f0ced072cd2c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724097528472798822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernete
s.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c09ba25a9801ff169f19647ad1fbdd22aaf654ac030c1f93a4b0c9a999e453,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724097528399274745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724097524336094571,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724097524215098167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]string{io.kubernetes.
container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae,PodSandboxId:bc0526a6cc5b66c7d725af68cfc1142c2a1690930a8d29dcafc0a6c2e9bbf94c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724097522549800950,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kube
rnetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3,PodSandboxId:04773d646a4805fc307be3d4962a0d69eece9dbf938341bfb48e7dca69c8e352,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724097522440289922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0,PodSandboxId:97a2ccc19121b3e6cc856b4bfe704b0d64118bc96b8cd2ef34b1d73b8c6a5bee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724097521943106080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2,PodSandboxId:61b76d0d2f4f618dd211ef9c20819370ed398a5375a0c008b71f6312db73890e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724097521952123562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f587bef-7429-4af2-ad02-3423df0c3e94 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.733203729Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d2a82873-7ac3-48e2-9b56-5df66ccf4799 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.733278288Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d2a82873-7ac3-48e2-9b56-5df66ccf4799 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.737259384Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df8fa827-ecda-484b-b241-4757c77e809a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.737638971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097551737612077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df8fa827-ecda-484b-b241-4757c77e809a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.738280799Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3270e9de-3cc6-4c19-ae10-5e4e949899d4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.738354259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3270e9de-3cc6-4c19-ae10-5e4e949899d4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.738603306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:520f27ed06d74c8d72a71cb6b710931be15d57349dc84d6addf3f17e5d45e1e4,PodSandboxId:685113e39e23351b2813c2da865a0acc2ad8a24f0ccd0d49ce497732bef321a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724097532179265307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd064504364d8ab84157d5097535d715c95c99f52b5cfca602136ec82be83677,PodSandboxId:6cc030921c7c40908c8e724ece7c6bd2c1e42ce6112ab3e4476dc3524cd3aa93,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724097532172244929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c7b8274ac3fc5960f8c620b1c947e80091b2742eb2b7755dceec1f2fdca7cb,PodSandboxId:7976e34be27ac12f9e15b0fffb9fbd20f1cab658eee55d923564382af2863505,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724097528426480711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annota
tions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94efaa503d71f37142cc657a5c12d83a68eaf26b8f8a39209f14d17e7aede30,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724097528436927820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36fe82de3b13311fd356d124ae973b46707e6e3b3adaa19a549738097deb26f,PodSandboxId:e7b297af9dc53849969bea95857c2d8b237073ba371897d2c04f0ced072cd2c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724097528472798822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernete
s.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c09ba25a9801ff169f19647ad1fbdd22aaf654ac030c1f93a4b0c9a999e453,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724097528399274745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724097524336094571,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724097524215098167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]string{io.kubernetes.
container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae,PodSandboxId:bc0526a6cc5b66c7d725af68cfc1142c2a1690930a8d29dcafc0a6c2e9bbf94c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724097522549800950,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kube
rnetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3,PodSandboxId:04773d646a4805fc307be3d4962a0d69eece9dbf938341bfb48e7dca69c8e352,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724097522440289922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0,PodSandboxId:97a2ccc19121b3e6cc856b4bfe704b0d64118bc96b8cd2ef34b1d73b8c6a5bee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724097521943106080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2,PodSandboxId:61b76d0d2f4f618dd211ef9c20819370ed398a5375a0c008b71f6312db73890e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724097521952123562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3270e9de-3cc6-4c19-ae10-5e4e949899d4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.779530174Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8626a3e-6151-40d9-9b82-e17c32b1d801 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.779736889Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8626a3e-6151-40d9-9b82-e17c32b1d801 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.780969651Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c61b2cc7-e08c-4057-9c11-a541cdffd439 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.781512669Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097551781482457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c61b2cc7-e08c-4057-9c11-a541cdffd439 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.782275730Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c58f1388-82f8-4536-baf4-a3d0c9049cb2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.782350380Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c58f1388-82f8-4536-baf4-a3d0c9049cb2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:59:11 pause-232147 crio[2642]: time="2024-08-19 19:59:11.782617367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:520f27ed06d74c8d72a71cb6b710931be15d57349dc84d6addf3f17e5d45e1e4,PodSandboxId:685113e39e23351b2813c2da865a0acc2ad8a24f0ccd0d49ce497732bef321a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724097532179265307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd064504364d8ab84157d5097535d715c95c99f52b5cfca602136ec82be83677,PodSandboxId:6cc030921c7c40908c8e724ece7c6bd2c1e42ce6112ab3e4476dc3524cd3aa93,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724097532172244929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c7b8274ac3fc5960f8c620b1c947e80091b2742eb2b7755dceec1f2fdca7cb,PodSandboxId:7976e34be27ac12f9e15b0fffb9fbd20f1cab658eee55d923564382af2863505,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724097528426480711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annota
tions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94efaa503d71f37142cc657a5c12d83a68eaf26b8f8a39209f14d17e7aede30,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724097528436927820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36fe82de3b13311fd356d124ae973b46707e6e3b3adaa19a549738097deb26f,PodSandboxId:e7b297af9dc53849969bea95857c2d8b237073ba371897d2c04f0ced072cd2c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724097528472798822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernete
s.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c09ba25a9801ff169f19647ad1fbdd22aaf654ac030c1f93a4b0c9a999e453,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724097528399274745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f,PodSandboxId:96b1c1881ad230a3ab5e8324c15be58a489425460f4eb9d0cbd1525cce632cd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724097524336094571,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a00b303ba8304647a0e2924005627c5,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606,PodSandboxId:549574688b704b3fe44d6ecbc72fb4b4df5c44bc3174b2917a6a052a1e92daee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724097524215098167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a42810500f7482545a5ee02b62bb4b8,},Annotations:map[string]string{io.kubernetes.
container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae,PodSandboxId:bc0526a6cc5b66c7d725af68cfc1142c2a1690930a8d29dcafc0a6c2e9bbf94c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724097522549800950,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvnqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eda2399-7ef3-451f-9274-b94af2f13767,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kube
rnetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3,PodSandboxId:04773d646a4805fc307be3d4962a0d69eece9dbf938341bfb48e7dca69c8e352,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724097522440289922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912a77441dc947e6f62b6e1b3fa8984,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0,PodSandboxId:97a2ccc19121b3e6cc856b4bfe704b0d64118bc96b8cd2ef34b1d73b8c6a5bee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724097521943106080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ztskd,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4fa0745-fdef-4780-9b98-0a777d4cec90,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2,PodSandboxId:61b76d0d2f4f618dd211ef9c20819370ed398a5375a0c008b71f6312db73890e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724097521952123562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-232147,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0b76749c578fb5d3076d28d72e7894da,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c58f1388-82f8-4536-baf4-a3d0c9049cb2 name=/runtime.v1.RuntimeService/ListContainers
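The ListContainers exchanges captured above are ordinary CRI gRPC calls against the node's CRI-O socket. A minimal Go sketch of the same call follows; it is illustrative only (not part of the captured logs) and assumes access to the node's /var/run/crio/crio.sock plus the k8s.io/cri-api and google.golang.org/grpc modules.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O runtime endpoint (the same unix socket crictl uses on the node).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI-O socket: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// An empty filter returns every container, matching the
	// "No filters were applied, returning full container list" debug lines above.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		// The "container status" table below shows the first 13 characters of each Id.
		fmt.Printf("%s  %-25s attempt=%d state=%s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}

Run on the node itself (for example via minikube ssh into pause-232147), this lists the same Attempt 1 (exited) and Attempt 2 (running) control-plane containers that appear in the responses above and in the status table below.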
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	520f27ed06d74       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   19 seconds ago      Running             kube-proxy                2                   685113e39e233       kube-proxy-ztskd
	fd064504364d8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   6cc030921c7c4       coredns-6f6b679f8f-gvnqf
	d36fe82de3b13       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   23 seconds ago      Running             kube-apiserver            2                   e7b297af9dc53       kube-apiserver-pause-232147
	e94efaa503d71       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   23 seconds ago      Running             kube-scheduler            2                   549574688b704       kube-scheduler-pause-232147
	a5c7b8274ac3f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   23 seconds ago      Running             etcd                      2                   7976e34be27ac       etcd-pause-232147
	36c09ba25a980       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   23 seconds ago      Running             kube-controller-manager   2                   96b1c1881ad23       kube-controller-manager-pause-232147
	ca316e974a78c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   27 seconds ago      Exited              kube-controller-manager   1                   96b1c1881ad23       kube-controller-manager-pause-232147
	97e775e2fa765       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   27 seconds ago      Exited              kube-scheduler            1                   549574688b704       kube-scheduler-pause-232147
	8de8790b20b2c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   29 seconds ago      Exited              coredns                   1                   bc0526a6cc5b6       coredns-6f6b679f8f-gvnqf
	a21d2be3b0654       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   29 seconds ago      Exited              kube-apiserver            1                   04773d646a480       kube-apiserver-pause-232147
	bd77b49b539b6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   29 seconds ago      Exited              etcd                      1                   61b76d0d2f4f6       etcd-pause-232147
	87de891f8b0c8       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   29 seconds ago      Exited              kube-proxy                1                   97a2ccc19121b       kube-proxy-ztskd
	
	
	==> coredns [8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae] <==
	
	
	==> coredns [fd064504364d8ab84157d5097535d715c95c99f52b5cfca602136ec82be83677] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44577 - 51715 "HINFO IN 572853228764530317.4687144521485707226. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021885695s
	
	
	==> describe nodes <==
	Name:               pause-232147
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-232147
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=pause-232147
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_58_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:58:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-232147
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:59:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:58:51 +0000   Mon, 19 Aug 2024 19:58:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:58:51 +0000   Mon, 19 Aug 2024 19:58:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:58:51 +0000   Mon, 19 Aug 2024 19:58:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:58:51 +0000   Mon, 19 Aug 2024 19:58:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.125
	  Hostname:    pause-232147
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 cec13e5b52894b7da1ee2640bfe5479a
	  System UUID:                cec13e5b-5289-4b7d-a1ee-2640bfe5479a
	  Boot ID:                    6d58c741-491b-4fca-9459-a14744ac1965
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-gvnqf                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     57s
	  kube-system                 etcd-pause-232147                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         62s
	  kube-system                 kube-apiserver-pause-232147             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-pause-232147    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-ztskd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-pause-232147             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     68s (x7 over 69s)  kubelet          Node pause-232147 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  68s (x8 over 69s)  kubelet          Node pause-232147 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 69s)  kubelet          Node pause-232147 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s                kubelet          Node pause-232147 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  62s                kubelet          Node pause-232147 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s                kubelet          Node pause-232147 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeReady                61s                kubelet          Node pause-232147 status is now: NodeReady
	  Normal  RegisteredNode           58s                node-controller  Node pause-232147 event: Registered Node pause-232147 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  24s (x8 over 25s)  kubelet          Node pause-232147 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 25s)  kubelet          Node pause-232147 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 25s)  kubelet          Node pause-232147 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18s                node-controller  Node pause-232147 event: Registered Node pause-232147 in Controller
	
	
	==> dmesg <==
	[  +8.633352] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.060437] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063277] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.180529] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.146161] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.280481] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.365060] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.065319] kauditd_printk_skb: 130 callbacks suppressed
	[Aug19 19:58] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.936858] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.641222] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +0.080958] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.328701] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.100226] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.264780] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.114436] systemd-fstab-generator[2031]: Ignoring "noauto" option for root device
	[  +0.220683] systemd-fstab-generator[2044]: Ignoring "noauto" option for root device
	[  +0.229901] systemd-fstab-generator[2058]: Ignoring "noauto" option for root device
	[  +0.211063] systemd-fstab-generator[2087]: Ignoring "noauto" option for root device
	[  +0.758930] systemd-fstab-generator[2421]: Ignoring "noauto" option for root device
	[  +1.201288] systemd-fstab-generator[2762]: Ignoring "noauto" option for root device
	[  +4.005541] systemd-fstab-generator[3303]: Ignoring "noauto" option for root device
	[  +0.081224] kauditd_printk_skb: 243 callbacks suppressed
	[  +7.604898] kauditd_printk_skb: 53 callbacks suppressed
	[Aug19 19:59] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	
	
	==> etcd [a5c7b8274ac3fc5960f8c620b1c947e80091b2742eb2b7755dceec1f2fdca7cb] <==
	{"level":"info","ts":"2024-08-19T19:58:49.140677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 switched to configuration voters=(10250663014225178659)"}
	{"level":"info","ts":"2024-08-19T19:58:49.140734Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"40e9c4986db8cbc5","local-member-id":"8e41abb37b207023","added-peer-id":"8e41abb37b207023","added-peer-peer-urls":["https://192.168.50.125:2380"]}
	{"level":"info","ts":"2024-08-19T19:58:49.140844Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"40e9c4986db8cbc5","local-member-id":"8e41abb37b207023","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:58:49.140883Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:58:49.156436Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T19:58:49.156673Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8e41abb37b207023","initial-advertise-peer-urls":["https://192.168.50.125:2380"],"listen-peer-urls":["https://192.168.50.125:2380"],"advertise-client-urls":["https://192.168.50.125:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.125:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T19:58:49.156712Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T19:58:49.156802Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.125:2380"}
	{"level":"info","ts":"2024-08-19T19:58:49.156823Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.125:2380"}
	{"level":"info","ts":"2024-08-19T19:58:50.298489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T19:58:50.298567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T19:58:50.298593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 received MsgPreVoteResp from 8e41abb37b207023 at term 2"}
	{"level":"info","ts":"2024-08-19T19:58:50.298607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T19:58:50.298614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 received MsgVoteResp from 8e41abb37b207023 at term 3"}
	{"level":"info","ts":"2024-08-19T19:58:50.298623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T19:58:50.298630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e41abb37b207023 elected leader 8e41abb37b207023 at term 3"}
	{"level":"info","ts":"2024-08-19T19:58:50.305092Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8e41abb37b207023","local-member-attributes":"{Name:pause-232147 ClientURLs:[https://192.168.50.125:2379]}","request-path":"/0/members/8e41abb37b207023/attributes","cluster-id":"40e9c4986db8cbc5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T19:58:50.305110Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:58:50.305359Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:58:50.305767Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T19:58:50.305814Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T19:58:50.306472Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:58:50.306623Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:58:50.307402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.125:2379"}
	{"level":"info","ts":"2024-08-19T19:58:50.307526Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2] <==
	
	
	==> kernel <==
	 19:59:12 up 1 min,  0 users,  load average: 1.47, 0.48, 0.17
	Linux pause-232147 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3] <==
	
	
	==> kube-apiserver [d36fe82de3b13311fd356d124ae973b46707e6e3b3adaa19a549738097deb26f] <==
	I0819 19:58:51.648498       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 19:58:51.648544       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 19:58:51.656881       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 19:58:51.657062       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 19:58:51.657082       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 19:58:51.657154       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 19:58:51.657182       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 19:58:51.657331       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 19:58:51.658168       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 19:58:51.658947       1 aggregator.go:171] initial CRD sync complete...
	I0819 19:58:51.658978       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 19:58:51.659009       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 19:58:51.659015       1 cache.go:39] Caches are synced for autoregister controller
	I0819 19:58:51.698745       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 19:58:51.713161       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 19:58:51.713254       1 policy_source.go:224] refreshing policies
	I0819 19:58:51.756119       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 19:58:52.556246       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 19:58:53.086677       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 19:58:53.111132       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 19:58:53.164861       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 19:58:53.213887       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 19:58:53.225141       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 19:58:55.314341       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 19:58:55.363710       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [36c09ba25a9801ff169f19647ad1fbdd22aaf654ac030c1f93a4b0c9a999e453] <==
	I0819 19:58:54.963617       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0819 19:58:54.963635       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0819 19:58:54.963643       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0819 19:58:54.963708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-232147"
	I0819 19:58:54.963547       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0819 19:58:54.963822       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-232147"
	I0819 19:58:54.963864       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0819 19:58:54.964736       1 shared_informer.go:320] Caches are synced for ephemeral
	I0819 19:58:54.970487       1 shared_informer.go:320] Caches are synced for service account
	I0819 19:58:54.974351       1 shared_informer.go:320] Caches are synced for persistent volume
	I0819 19:58:55.026810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="116.595727ms"
	I0819 19:58:55.027078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="103.13µs"
	I0819 19:58:55.061004       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0819 19:58:55.068789       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0819 19:58:55.105632       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0819 19:58:55.116616       1 shared_informer.go:320] Caches are synced for endpoint
	I0819 19:58:55.137711       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 19:58:55.165306       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 19:58:55.210212       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0819 19:58:55.211825       1 shared_informer.go:320] Caches are synced for disruption
	I0819 19:58:55.589496       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 19:58:55.651784       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 19:58:55.651893       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0819 19:58:59.474816       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="19.167031ms"
	I0819 19:58:59.474909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="50.943µs"
	
	
	==> kube-controller-manager [ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f] <==
	I0819 19:58:45.346924       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-proxy [520f27ed06d74c8d72a71cb6b710931be15d57349dc84d6addf3f17e5d45e1e4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:58:52.385361       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:58:52.394751       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.125"]
	E0819 19:58:52.394933       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:58:52.429522       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:58:52.429622       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:58:52.429675       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:58:52.432196       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:58:52.432522       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:58:52.432572       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:58:52.433622       1 config.go:197] "Starting service config controller"
	I0819 19:58:52.433686       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:58:52.433721       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:58:52.433738       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:58:52.434268       1 config.go:326] "Starting node config controller"
	I0819 19:58:52.434339       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:58:52.533810       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 19:58:52.533881       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:58:52.534429       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0] <==
	
	
	==> kube-scheduler [97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606] <==
	I0819 19:58:45.693769       1 serving.go:386] Generated self-signed cert in-memory
	W0819 19:58:46.212587       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.50.125:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.50.125:8443: connect: connection refused
	W0819 19:58:46.212680       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 19:58:46.212705       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 19:58:46.219639       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 19:58:46.219727       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:58:46.221751       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0819 19:58:46.221864       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0819 19:58:46.221970       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e94efaa503d71f37142cc657a5c12d83a68eaf26b8f8a39209f14d17e7aede30] <==
	I0819 19:58:49.747005       1 serving.go:386] Generated self-signed cert in-memory
	I0819 19:58:51.678200       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 19:58:51.678357       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:58:51.685371       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0819 19:58:51.685501       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0819 19:58:51.685605       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 19:58:51.685657       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 19:58:51.685691       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0819 19:58:51.685743       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0819 19:58:51.686607       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 19:58:51.686742       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 19:58:51.786592       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0819 19:58:51.786668       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0819 19:58:51.786758       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 19:58:48 pause-232147 kubelet[3310]: E0819 19:58:48.267625    3310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.125:8443: connect: connection refused" node="pause-232147"
	Aug 19 19:58:48 pause-232147 kubelet[3310]: I0819 19:58:48.371615    3310 scope.go:117] "RemoveContainer" containerID="ca316e974a78cefb52e09931ddf40c366eeaf95d3e88f530720554bdce23601f"
	Aug 19 19:58:48 pause-232147 kubelet[3310]: I0819 19:58:48.372463    3310 scope.go:117] "RemoveContainer" containerID="97e775e2fa7659e0e7ef22311ebe63cdb630f6e9688a1c52957d409ff88c9606"
	Aug 19 19:58:48 pause-232147 kubelet[3310]: I0819 19:58:48.373063    3310 scope.go:117] "RemoveContainer" containerID="bd77b49b539b66437d83a597edae7053b4aa97458faf8bdca5e1291db42c82a2"
	Aug 19 19:58:48 pause-232147 kubelet[3310]: I0819 19:58:48.373270    3310 scope.go:117] "RemoveContainer" containerID="a21d2be3b0654f535a0b85beab522a910e60da1332a4ca91a69518797c176bc3"
	Aug 19 19:58:48 pause-232147 kubelet[3310]: E0819 19:58:48.489973    3310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-232147?timeout=10s\": dial tcp 192.168.50.125:8443: connect: connection refused" interval="800ms"
	Aug 19 19:58:48 pause-232147 kubelet[3310]: I0819 19:58:48.670395    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-232147"
	Aug 19 19:58:48 pause-232147 kubelet[3310]: E0819 19:58:48.671231    3310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.125:8443: connect: connection refused" node="pause-232147"
	Aug 19 19:58:49 pause-232147 kubelet[3310]: I0819 19:58:49.472944    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-232147"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: E0819 19:58:51.790377    3310 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-232147\" already exists" pod="kube-system/kube-controller-manager-pause-232147"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.800911    3310 kubelet_node_status.go:111] "Node was previously registered" node="pause-232147"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.801199    3310 kubelet_node_status.go:75] "Successfully registered node" node="pause-232147"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.801293    3310 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.802330    3310 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.849875    3310 apiserver.go:52] "Watching apiserver"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.880002    3310 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.935109    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4fa0745-fdef-4780-9b98-0a777d4cec90-xtables-lock\") pod \"kube-proxy-ztskd\" (UID: \"c4fa0745-fdef-4780-9b98-0a777d4cec90\") " pod="kube-system/kube-proxy-ztskd"
	Aug 19 19:58:51 pause-232147 kubelet[3310]: I0819 19:58:51.935259    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4fa0745-fdef-4780-9b98-0a777d4cec90-lib-modules\") pod \"kube-proxy-ztskd\" (UID: \"c4fa0745-fdef-4780-9b98-0a777d4cec90\") " pod="kube-system/kube-proxy-ztskd"
	Aug 19 19:58:52 pause-232147 kubelet[3310]: I0819 19:58:52.153751    3310 scope.go:117] "RemoveContainer" containerID="87de891f8b0c8d70cd1ed1e52f9bb91ad789661b868c82d342085e78f45e8af0"
	Aug 19 19:58:52 pause-232147 kubelet[3310]: I0819 19:58:52.154257    3310 scope.go:117] "RemoveContainer" containerID="8de8790b20b2cd7839093a8c87b5f67212ad9f970d60b7768d50e60d935439ae"
	Aug 19 19:58:57 pause-232147 kubelet[3310]: E0819 19:58:57.963976    3310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097537963552255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:58:57 pause-232147 kubelet[3310]: E0819 19:58:57.964060    3310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097537963552255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:58:59 pause-232147 kubelet[3310]: I0819 19:58:59.439098    3310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 19 19:59:07 pause-232147 kubelet[3310]: E0819 19:59:07.965929    3310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097547965252728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:59:07 pause-232147 kubelet[3310]: E0819 19:59:07.965965    3310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724097547965252728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:59:11.361637  481980 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-430949/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-232147 -n pause-232147
helpers_test.go:261: (dbg) Run:  kubectl --context pause-232147 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (44.64s)
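The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.ErrTooLong: a bufio.Scanner refuses lines longer than its default limit of bufio.MaxScanTokenSize (64 KiB), and a longer line in lastStart.txt aborts the scan. Below is a minimal sketch, not minikube's actual logs code, of a line reader that raises that limit; the file path is copied from the error message purely for illustration.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Path taken from the error message above, for illustration only.
	f, err := os.Open("/home/jenkins/minikube-integration/19423-430949/.minikube/logs/lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default cap is bufio.MaxScanTokenSize (64 KiB); a longer log line makes
	// Scan() stop with bufio.ErrTooLong ("bufio.Scanner: token too long").
	// Allow lines up to 10 MiB instead.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}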

                                                
                                    

Test pass (176/222)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 12.51
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 4.68
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.14
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
22 TestOffline 58.83
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 195.43
31 TestAddons/serial/GCPAuth/Namespaces 0.14
33 TestAddons/parallel/Registry 15.89
35 TestAddons/parallel/InspektorGadget 11.63
37 TestAddons/parallel/HelmTiller 11.74
39 TestAddons/parallel/CSI 64.92
40 TestAddons/parallel/Headlamp 18.7
41 TestAddons/parallel/CloudSpanner 5.67
42 TestAddons/parallel/LocalPath 13.17
43 TestAddons/parallel/NvidiaDevicePlugin 5.52
44 TestAddons/parallel/Yakd 10.78
46 TestCertOptions 58.87
47 TestCertExpiration 352.09
49 TestForceSystemdFlag 67.68
50 TestForceSystemdEnv 68.02
52 TestKVMDriverInstallOrUpdate 5.14
56 TestErrorSpam/setup 41.8
57 TestErrorSpam/start 0.37
58 TestErrorSpam/status 0.73
59 TestErrorSpam/pause 1.54
60 TestErrorSpam/unpause 1.7
61 TestErrorSpam/stop 5.34
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 54.35
66 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.64
73 TestFunctional/serial/CacheCmd/cache/add_local 2.03
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
78 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/ExtraConfig 27.48
82 TestFunctional/serial/ComponentHealth 0.08
83 TestFunctional/serial/LogsCmd 1.25
84 TestFunctional/serial/LogsFileCmd 1.26
85 TestFunctional/serial/InvalidService 5.46
87 TestFunctional/parallel/ConfigCmd 0.36
88 TestFunctional/parallel/DashboardCmd 12.57
89 TestFunctional/parallel/DryRun 0.29
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.82
95 TestFunctional/parallel/ServiceCmdConnect 14.51
96 TestFunctional/parallel/AddonsCmd 0.15
97 TestFunctional/parallel/PersistentVolumeClaim 35.25
99 TestFunctional/parallel/SSHCmd 0.49
100 TestFunctional/parallel/CpCmd 1.32
101 TestFunctional/parallel/MySQL 23.44
102 TestFunctional/parallel/FileSync 0.2
103 TestFunctional/parallel/CertSync 1.38
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
111 TestFunctional/parallel/License 0.3
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
114 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.27
118 TestFunctional/parallel/ProfileCmd/profile_list 0.31
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
120 TestFunctional/parallel/ServiceCmd/DeployApp 13.22
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
127 TestFunctional/parallel/MountCmd/any-port 7.67
128 TestFunctional/parallel/ServiceCmd/List 0.47
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
131 TestFunctional/parallel/ServiceCmd/Format 0.3
132 TestFunctional/parallel/ServiceCmd/URL 0.34
133 TestFunctional/parallel/Version/short 0.05
134 TestFunctional/parallel/Version/components 0.5
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
139 TestFunctional/parallel/ImageCommands/ImageBuild 3.41
140 TestFunctional/parallel/ImageCommands/Setup 1.57
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.39
142 TestFunctional/parallel/MountCmd/specific-port 1.87
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.05
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.53
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.32
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.66
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.97
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.94
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 199.23
160 TestMultiControlPlane/serial/DeployApp 5.56
161 TestMultiControlPlane/serial/PingHostFromPods 1.21
162 TestMultiControlPlane/serial/AddWorkerNode 55.92
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
165 TestMultiControlPlane/serial/CopyFile 12.86
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.77
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
174 TestMultiControlPlane/serial/RestartCluster 460.22
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.4
176 TestMultiControlPlane/serial/AddSecondaryNode 73.42
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
181 TestJSONOutput/start/Command 56.08
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.68
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.61
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.35
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 86.92
213 TestMountStart/serial/StartWithMountFirst 28.67
214 TestMountStart/serial/VerifyMountFirst 0.39
215 TestMountStart/serial/StartWithMountSecond 24.13
216 TestMountStart/serial/VerifyMountSecond 0.38
217 TestMountStart/serial/DeleteFirst 0.91
218 TestMountStart/serial/VerifyMountPostDelete 0.39
219 TestMountStart/serial/Stop 1.28
220 TestMountStart/serial/RestartStopped 23.11
221 TestMountStart/serial/VerifyMountPostStop 0.38
224 TestMultiNode/serial/FreshStart2Nodes 110.71
225 TestMultiNode/serial/DeployApp2Nodes 5.26
226 TestMultiNode/serial/PingHostFrom2Pods 0.81
227 TestMultiNode/serial/AddNode 46.06
228 TestMultiNode/serial/MultiNodeLabels 0.07
229 TestMultiNode/serial/ProfileList 0.22
230 TestMultiNode/serial/CopyFile 7.37
231 TestMultiNode/serial/StopNode 2.27
232 TestMultiNode/serial/StartAfterStop 38.21
234 TestMultiNode/serial/DeleteNode 2.28
236 TestMultiNode/serial/RestartMultiNode 182.51
237 TestMultiNode/serial/ValidateNameConflict 40.62
244 TestScheduledStopUnix 114.84
248 TestRunningBinaryUpgrade 211.39
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
254 TestNoKubernetes/serial/StartWithK8s 90.55
274 TestPause/serial/Start 84.93
275 TestNoKubernetes/serial/StartWithStopK8s 67.6
277 TestNoKubernetes/serial/Start 34.27
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
279 TestNoKubernetes/serial/ProfileList 17.47
280 TestNoKubernetes/serial/Stop 1.65
281 TestNoKubernetes/serial/StartNoArgs 60.68
282 TestStoppedBinaryUpgrade/Setup 0.39
283 TestStoppedBinaryUpgrade/Upgrade 109.44
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
287 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
TestDownloadOnly/v1.20.0/json-events (12.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-873673 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-873673 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.514257746s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.51s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-873673
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-873673: exit status 85 (61.261418ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-873673 | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC |          |
	|         | -p download-only-873673        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:36:21
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:36:21.883683  438171 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:36:21.883956  438171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:36:21.883964  438171 out.go:358] Setting ErrFile to fd 2...
	I0819 18:36:21.883969  438171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:36:21.884162  438171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	W0819 18:36:21.884298  438171 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19423-430949/.minikube/config/config.json: open /home/jenkins/minikube-integration/19423-430949/.minikube/config/config.json: no such file or directory
	I0819 18:36:21.884956  438171 out.go:352] Setting JSON to true
	I0819 18:36:21.885985  438171 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8333,"bootTime":1724084249,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:36:21.886068  438171 start.go:139] virtualization: kvm guest
	I0819 18:36:21.888361  438171 out.go:97] [download-only-873673] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0819 18:36:21.888520  438171 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 18:36:21.888575  438171 notify.go:220] Checking for updates...
	I0819 18:36:21.889997  438171 out.go:169] MINIKUBE_LOCATION=19423
	I0819 18:36:21.891470  438171 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:36:21.892775  438171 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 18:36:21.894229  438171 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 18:36:21.895461  438171 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0819 18:36:21.897920  438171 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 18:36:21.898238  438171 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 18:36:21.936095  438171 out.go:97] Using the kvm2 driver based on user configuration
	I0819 18:36:21.936132  438171 start.go:297] selected driver: kvm2
	I0819 18:36:21.936141  438171 start.go:901] validating driver "kvm2" against <nil>
	I0819 18:36:21.936538  438171 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:36:21.936674  438171 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-430949/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:36:21.953634  438171 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:36:21.953720  438171 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 18:36:21.954476  438171 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0819 18:36:21.954700  438171 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 18:36:21.954770  438171 cni.go:84] Creating CNI manager for ""
	I0819 18:36:21.954809  438171 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:36:21.954821  438171 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 18:36:21.954896  438171 start.go:340] cluster config:
	{Name:download-only-873673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-873673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:36:21.955190  438171 iso.go:125] acquiring lock: {Name:mk1d5b917be7292d760e387e491d6f38ca3dd7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:36:21.956997  438171 out.go:97] Downloading VM boot image ...
	I0819 18:36:21.957049  438171 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19423-430949/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 18:36:24.458476  438171 out.go:97] Starting "download-only-873673" primary control-plane node in "download-only-873673" cluster
	I0819 18:36:24.458509  438171 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 18:36:24.481935  438171 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:36:24.481976  438171 cache.go:56] Caching tarball of preloaded images
	I0819 18:36:24.482122  438171 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 18:36:24.483640  438171 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 18:36:24.483655  438171 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 18:36:24.510273  438171 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19423-430949/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-873673 host does not exist
	  To start a cluster, run: "minikube start -p download-only-873673"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
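The "(dbg) Non-zero exit ... exit status 85" line above reflects running the minikube binary and inspecting its process exit code. A minimal Go sketch of that pattern follows, assuming a plain os/exec call rather than the actual helpers behind aaa_download_only_test.go; the binary path and profile name are just the values shown above, used as placeholders.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-873673")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		// A non-zero exit (85 above, where the profile's host no longer exists)
		// lands here; the combined output still carries the audit table and log text.
		fmt.Printf("non-zero exit: %d\n%s", exitErr.ExitCode(), out)
	case err != nil:
		// The binary could not be started at all (e.g. not found).
		fmt.Println("failed to run:", err)
	default:
		fmt.Printf("%s", out)
	}
}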

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-873673
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (4.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-087609 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-087609 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.678429319s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (4.68s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-087609
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-087609: exit status 85 (61.691944ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-873673 | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC |                     |
	|         | -p download-only-873673        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC | 19 Aug 24 18:36 UTC |
	| delete  | -p download-only-873673        | download-only-873673 | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC | 19 Aug 24 18:36 UTC |
	| start   | -o=json --download-only        | download-only-087609 | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC |                     |
	|         | -p download-only-087609        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:36:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:36:34.730835  438391 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:36:34.731128  438391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:36:34.731137  438391 out.go:358] Setting ErrFile to fd 2...
	I0819 18:36:34.731143  438391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:36:34.731328  438391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 18:36:34.731945  438391 out.go:352] Setting JSON to true
	I0819 18:36:34.732943  438391 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8346,"bootTime":1724084249,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:36:34.733012  438391 start.go:139] virtualization: kvm guest
	I0819 18:36:34.735161  438391 out.go:97] [download-only-087609] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:36:34.735346  438391 notify.go:220] Checking for updates...
	I0819 18:36:34.736801  438391 out.go:169] MINIKUBE_LOCATION=19423
	I0819 18:36:34.738190  438391 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:36:34.739434  438391 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 18:36:34.740681  438391 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 18:36:34.741857  438391 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-087609 host does not exist
	  To start a cluster, run: "minikube start -p download-only-087609"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-087609
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-219006 --alsologtostderr --binary-mirror http://127.0.0.1:40397 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-219006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-219006
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestOffline (58.83s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-791573 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-791573 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (57.805038801s)
helpers_test.go:175: Cleaning up "offline-crio-791573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-791573
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-791573: (1.029311514s)
--- PASS: TestOffline (58.83s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-966657
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-966657: exit status 85 (50.397152ms)

                                                
                                                
-- stdout --
	* Profile "addons-966657" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-966657"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
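
The check above relies on a specific non-zero exit code (85) when the profile does not exist. A minimal sketch of asserting an exit code from Go, reusing the binary path and arguments shown in the log (both are environment-specific assumptions):

// exitcode.go - hypothetical sketch: run a command and report its exit status,
// in the spirit of the "exit status 85" checks above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-966657")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output:\n%s", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// ExitCode() reports the process exit status, e.g. 85 for a missing profile.
		fmt.Println("exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("command did not start:", err)
	} else {
		fmt.Println("exit code: 0")
	}
}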

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-966657
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-966657: exit status 85 (51.034906ms)

                                                
                                                
-- stdout --
	* Profile "addons-966657" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-966657"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (195.43s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-966657 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-966657 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m15.434677227s)
--- PASS: TestAddons/Setup (195.43s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-966657 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-966657 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.848035ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-x89qh" [29139ceb-43bf-40ed-8a00-81e990604d2f] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003435329s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jwchm" [b551e7e6-c198-454e-a913-a278aaa5bf0b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004061319s
addons_test.go:342: (dbg) Run:  kubectl --context addons-966657 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-966657 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-966657 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.996675141s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 ip
2024/08/19 18:40:29 [DEBUG] GET http://192.168.39.241:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.89s)
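
The DEBUG line above probes the registry through the VM's IP (192.168.39.241, as reported by "minikube -p addons-966657 ip"). A minimal sketch of the same reachability check from the host; the address and port are specific to this cluster and are taken from the log:

// registryprobe.go - hypothetical sketch: confirm the registry addon answers
// on the node IP and port shown in the log above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://192.168.39.241:5000")
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}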

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.63s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7hdtw" [e36f38c0-a82b-4c86-a1b8-f8d262f7cec5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.085417394s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-966657
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-966657: (6.538936618s)
--- PASS: TestAddons/parallel/InspektorGadget (11.63s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.74s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.82932ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-vfspv" [6000c6c1-2382-4395-9752-1b553c6bd0a2] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.003739525s
addons_test.go:475: (dbg) Run:  kubectl --context addons-966657 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-966657 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.052990609s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.74s)

                                                
                                    
x
+
TestAddons/parallel/CSI (64.92s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.429333ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-966657 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-966657 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8ea9eb57-2fe7-46fa-a63f-b550ddc0f351] Pending
helpers_test.go:344: "task-pv-pod" [8ea9eb57-2fe7-46fa-a63f-b550ddc0f351] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8ea9eb57-2fe7-46fa-a63f-b550ddc0f351] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004385559s
addons_test.go:590: (dbg) Run:  kubectl --context addons-966657 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-966657 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-966657 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-966657 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-966657 delete pod task-pv-pod: (1.042885528s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-966657 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-966657 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-966657 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d4de93e6-25dd-4daf-b88e-22932201d6cc] Pending
helpers_test.go:344: "task-pv-pod-restore" [d4de93e6-25dd-4daf-b88e-22932201d6cc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d4de93e6-25dd-4daf-b88e-22932201d6cc] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004728402s
addons_test.go:632: (dbg) Run:  kubectl --context addons-966657 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-966657 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-966657 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-966657 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.776956799s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (64.92s)
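
The long run of identical helpers_test.go:394 lines above is a poll loop: the PVC's .status.phase is read repeatedly until it reports Bound. A minimal sketch of the same loop, assuming kubectl is on PATH and reusing the context and claim names from the log:

// waitpvc.go - hypothetical sketch of the PVC phase polling seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", "addons-966657",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		phase := strings.TrimSpace(string(out))
		fmt.Println("pvc hpvc phase:", phase)
		if phase == "Bound" {
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc hpvc to become Bound")
}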

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-966657 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-9kcwd" [ddfa2716-929d-4d9c-84ef-5c7ecc9ac4f8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-9kcwd" [ddfa2716-929d-4d9c-84ef-5c7ecc9ac4f8] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004363583s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-966657 addons disable headlamp --alsologtostderr -v=1: (5.725364398s)
--- PASS: TestAddons/parallel/Headlamp (18.70s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.67s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-wxqzl" [108666b8-f9a4-4dfa-968e-deea2ab1878a] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004285954s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-966657
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (13.17s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-966657 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-966657 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966657 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2bf1308c-964c-45ae-886c-3aaa5cddbf96] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2bf1308c-964c-45ae-886c-3aaa5cddbf96] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2bf1308c-964c-45ae-886c-3aaa5cddbf96] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004368778s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-966657 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 ssh "cat /opt/local-path-provisioner/pvc-bdc7ef98-d7dd-48c4-baf5-5803f9aa11e7_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-966657 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-966657 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.17s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.52s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pndfn" [c413c9e7-9614-44c5-9845-3d2b40c62cba] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.008237224s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-966657
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.52s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-7ksbs" [b447dde9-f3b3-4964-860e-b0cd604c7e2c] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004770585s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-966657 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-966657 addons disable yakd --alsologtostderr -v=1: (5.772745827s)
--- PASS: TestAddons/parallel/Yakd (10.78s)

                                                
                                    
x
+
TestCertOptions (58.87s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-319379 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-319379 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (57.373482644s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-319379 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-319379 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-319379 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-319379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-319379
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-319379: (1.022662726s)
--- PASS: TestCertOptions (58.87s)
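
cert_options_test.go:60 above dumps apiserver.crt with openssl to confirm that the extra --apiserver-names and --apiserver-ips ended up in the certificate. A minimal sketch of the same inspection in Go, assuming the certificate has first been copied out of the VM to a local apiserver.crt (for example via the "minikube ssh -- sudo cat ..." form shown above):

// sancheck.go - hypothetical sketch: print the SANs of a locally saved
// apiserver.crt, the same data the openssl call above inspects.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// These should include the values passed via --apiserver-names and
	// --apiserver-ips (localhost, www.google.com, 127.0.0.1, 192.168.15.15).
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs :", cert.IPAddresses)
}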

                                                
                                    
x
+
TestCertExpiration (352.09s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-228973 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-228973 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m34.105361512s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-228973 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-228973 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m16.973096905s)
helpers_test.go:175: Cleaning up "cert-expiration-228973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-228973
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-228973: (1.012910622s)
--- PASS: TestCertExpiration (352.09s)
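
TestCertExpiration first creates certificates with --cert-expiration=3m and then restarts with 8760h (one year). A minimal sketch of reading the resulting validity window, again assuming apiserver.crt has been copied out of the VM to a local file:

// certexpiry.go - hypothetical sketch: report how long a locally saved
// apiserver.crt remains valid, mirroring what --cert-expiration changes.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("NotBefore :", cert.NotBefore)
	fmt.Println("NotAfter  :", cert.NotAfter)
	fmt.Println("expires in:", time.Until(cert.NotAfter).Round(time.Minute))
}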

                                                
                                    
x
+
TestForceSystemdFlag (67.68s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-696812 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-696812 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m5.774521887s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-696812 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-696812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-696812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-696812: (1.697038805s)
--- PASS: TestForceSystemdFlag (67.68s)
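
docker_test.go:132 above reads /etc/crio/crio.conf.d/02-crio.conf after starting with --force-systemd; the assertion itself is not shown in this log, but the relevant CRI-O setting is cgroup_manager. A minimal sketch of that check, assuming the file has been copied locally to 02-crio.conf:

// cgroupcheck.go - hypothetical sketch: look for cgroup_manager = "systemd" in
// a locally saved copy of the CRI-O drop-in config read above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("02-crio.conf")
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(string(data), "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "cgroup_manager") {
			fmt.Println("found:", trimmed)
			if strings.Contains(trimmed, "\"systemd\"") {
				fmt.Println("CRI-O is using the systemd cgroup manager")
			}
			return
		}
	}
	fmt.Println("cgroup_manager not set in 02-crio.conf")
}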

                                                
                                    
x
+
TestForceSystemdEnv (68.02s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-899747 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-899747 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m7.011011202s)
helpers_test.go:175: Cleaning up "force-systemd-env-899747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-899747
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-899747: (1.010971195s)
--- PASS: TestForceSystemdEnv (68.02s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (5.14s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.14s)

                                                
                                    
x
+
TestErrorSpam/setup (41.8s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-212543 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-212543 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-212543 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-212543 --driver=kvm2  --container-runtime=crio: (41.803591896s)
--- PASS: TestErrorSpam/setup (41.80s)

                                                
                                    
x
+
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
x
+
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
x
+
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
x
+
TestErrorSpam/stop (5.34s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 stop: (1.618439029s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 stop: (1.967497451s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-212543 --log_dir /tmp/nospam-212543 stop: (1.754996884s)
--- PASS: TestErrorSpam/stop (5.34s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19423-430949/.minikube/files/etc/test/nested/copy/438159/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (54.35s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-124593 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0819 18:49:56.397914  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:56.405121  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:56.416674  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:56.438237  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:56.479748  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:56.561319  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:56.722942  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-124593 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (54.352742471s)
--- PASS: TestFunctional/serial/StartWithProxy (54.35s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-124593 cache add registry.k8s.io/pause:3.1: (1.112423624s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-124593 cache add registry.k8s.io/pause:3.3: (1.210575144s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-124593 cache add registry.k8s.io/pause:latest: (1.315838959s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-124593 /tmp/TestFunctionalserialCacheCmdcacheadd_local2170479746/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 cache add minikube-local-cache-test:functional-124593
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-124593 cache add minikube-local-cache-test:functional-124593: (1.693202208s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 cache delete minikube-local-cache-test:functional-124593
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-124593
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.03s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-124593 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (212.169582ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-124593 cache reload: (1.043812591s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
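
The cache_reload sequence above is: remove the cached image inside the node, confirm crictl no longer finds it, run "cache reload", then confirm it is back. A minimal sketch that shells out to the same commands shown in the log, assuming it is run from the directory containing out/minikube-linux-amd64:

// cachereload.go - hypothetical sketch of the cache_reload round trip above.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-124593"
	run("-p", p, "ssh", "sudo", "crictl", "rmi", "registry.k8s.io/pause:latest")
	if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected the image to be missing at this point")
	}
	run("-p", p, "cache", "reload")
	if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}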

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (27.48s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-124593 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-124593 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (27.476137393s)
functional_test.go:761: restart took 27.476243365s for "functional-124593" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (27.48s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-124593 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)
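
ComponentHealth above lists the control-plane pods as JSON and reports each one's phase and readiness. A minimal sketch of the same check, decoding only the fields it needs; kubectl on PATH and the functional-124593 context are assumptions carried over from the log:

// controlplanehealth.go - hypothetical sketch: list control-plane pods via
// kubectl and print each pod's phase and Ready condition, as the check above does.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-124593",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s: phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}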

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-124593 logs: (1.24939678s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 logs --file /tmp/TestFunctionalserialLogsFileCmd1656838777/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-124593 logs --file /tmp/TestFunctionalserialLogsFileCmd1656838777/001/logs.txt: (1.260827521s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (5.46s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-124593 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-124593
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-124593: exit status 115 (284.173412ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.22:31942 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-124593 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-124593 delete -f testdata/invalidsvc.yaml: (1.96274934s)
--- PASS: TestFunctional/serial/InvalidService (5.46s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-124593 config get cpus: exit status 14 (58.463582ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-124593 config get cpus: exit status 14 (51.762984ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (12.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-124593 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-124593 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 450098: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.57s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-124593 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-124593 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (152.49232ms)

                                                
                                                
-- stdout --
	* [functional-124593] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:04:54.629053  449883 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:04:54.629623  449883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:04:54.629637  449883 out.go:358] Setting ErrFile to fd 2...
	I0819 19:04:54.629645  449883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:04:54.629892  449883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:04:54.630460  449883 out.go:352] Setting JSON to false
	I0819 19:04:54.631547  449883 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10046,"bootTime":1724084249,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:04:54.631615  449883 start.go:139] virtualization: kvm guest
	I0819 19:04:54.633859  449883 out.go:177] * [functional-124593] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:04:54.635099  449883 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 19:04:54.635103  449883 notify.go:220] Checking for updates...
	I0819 19:04:54.637625  449883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:04:54.638959  449883 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:04:54.640198  449883 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:04:54.641494  449883 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:04:54.642812  449883 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:04:54.644481  449883 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:04:54.644915  449883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:04:54.644983  449883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:04:54.661174  449883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33273
	I0819 19:04:54.661787  449883 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:04:54.662668  449883 main.go:141] libmachine: Using API Version  1
	I0819 19:04:54.662713  449883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:04:54.663606  449883 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:04:54.663833  449883 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 19:04:54.664089  449883 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 19:04:54.664383  449883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:04:54.664415  449883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:04:54.682439  449883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36781
	I0819 19:04:54.682929  449883 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:04:54.683715  449883 main.go:141] libmachine: Using API Version  1
	I0819 19:04:54.683753  449883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:04:54.684200  449883 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:04:54.684405  449883 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 19:04:54.720366  449883 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:04:54.721820  449883 start.go:297] selected driver: kvm2
	I0819 19:04:54.721851  449883 start.go:901] validating driver "kvm2" against &{Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:04:54.721973  449883 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:04:54.724181  449883 out.go:201] 
	W0819 19:04:54.725791  449883 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 19:04:54.727088  449883 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-124593 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
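Note: the DryRun test drives minikube start --dry-run with a deliberately undersized --memory 250MB and only checks the validation outcome; the run above exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the existing profile. A minimal sketch of reproducing that check outside the harness with os/exec, assuming minikube is on PATH; treating 23 as the expected code comes from this log, not from minikube's source.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Validate the flags without creating or mutating the cluster.
		cmd := exec.Command("minikube", "start", "-p", "functional-124593",
			"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)

		// An undersized --memory should make even a dry run fail; this job saw exit status 23.
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("dry run exit code:", ee.ExitCode())
		} else if err == nil {
			fmt.Println("dry run unexpectedly succeeded")
		}
	}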

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-124593 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-124593 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (139.839857ms)

                                                
                                                
-- stdout --
	* [functional-124593] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:04:50.557281  449425 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:04:50.557409  449425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:04:50.557422  449425 out.go:358] Setting ErrFile to fd 2...
	I0819 19:04:50.557428  449425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:04:50.557746  449425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:04:50.558321  449425 out.go:352] Setting JSON to false
	I0819 19:04:50.559258  449425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10042,"bootTime":1724084249,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:04:50.559328  449425 start.go:139] virtualization: kvm guest
	I0819 19:04:50.561502  449425 out.go:177] * [functional-124593] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0819 19:04:50.563118  449425 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 19:04:50.563197  449425 notify.go:220] Checking for updates...
	I0819 19:04:50.565484  449425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:04:50.566860  449425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	I0819 19:04:50.568274  449425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	I0819 19:04:50.569565  449425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:04:50.570762  449425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:04:50.572501  449425 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:04:50.573014  449425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:04:50.573097  449425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:04:50.588250  449425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42371
	I0819 19:04:50.588716  449425 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:04:50.589311  449425 main.go:141] libmachine: Using API Version  1
	I0819 19:04:50.589337  449425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:04:50.589711  449425 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:04:50.589907  449425 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 19:04:50.590199  449425 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 19:04:50.590550  449425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:04:50.590619  449425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:04:50.606380  449425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I0819 19:04:50.606853  449425 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:04:50.607376  449425 main.go:141] libmachine: Using API Version  1
	I0819 19:04:50.607392  449425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:04:50.607831  449425 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:04:50.608080  449425 main.go:141] libmachine: (functional-124593) Calling .DriverName
	I0819 19:04:50.643112  449425 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0819 19:04:50.644271  449425 start.go:297] selected driver: kvm2
	I0819 19:04:50.644301  449425 start.go:901] validating driver "kvm2" against &{Name:functional-124593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-124593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:04:50.644453  449425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:04:50.646804  449425 out.go:201] 
	W0819 19:04:50.648031  449425 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 19:04:50.649102  449425 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
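Note: InternationalLanguage repeats the undersized dry run but expects the user-facing messages to come back localized, which is why the stdout above is French. A sketch of forcing that for a single invocation, assuming minikube picks its translation from the standard LANG/LC_ALL variables; the environment handling itself is ordinary os/exec usage, nothing minikube-specific.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "start", "-p", "functional-124593",
			"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
		// Inherit the current environment but override the locale so the
		// RSRC_INSUFFICIENT_REQ_MEMORY message is rendered in French (assumed mechanism).
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
		out, _ := cmd.CombinedOutput()
		fmt.Printf("%s", out)
	}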

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.82s)
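Note: StatusCmd exercises the default, Go-template and JSON output modes of minikube status; the template on the line above pulls the Host, Kubelet, APIServer and Kubeconfig fields (the test's own format string spells the key "kublet"). A sketch of consuming the JSON form instead, assuming those four fields appear in the object; the struct below names only what this test formats, not minikube's full schema.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// statusFields names only the fields the test formats; the real JSON
	// object returned by "minikube status -o json" carries more keys.
	type statusFields struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-124593",
			"status", "-o", "json").Output()
		if err != nil {
			// A stopped component makes status exit non-zero; stdout is still populated.
			fmt.Println("status exited with:", err)
		}
		var st statusFields
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
	}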

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (14.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-124593 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-124593 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-87rkf" [12da4ea6-f37f-4870-8cb4-180985d63872] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-87rkf" [12da4ea6-f37f-4870-8cb4-180985d63872] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.004248887s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.22:32537
functional_test.go:1675: http://192.168.39.22:32537: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-87rkf

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.22:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.22:32537
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (14.51s)
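Note: ServiceCmdConnect creates an echoserver deployment, exposes it as a NodePort service, resolves the endpoint with "minikube service hello-node-connect --url" and fetches it; the body above is echoserver's standard request dump. A sketch of the final resolve-and-fetch step, assuming the printed URL is reachable from the host running the check.

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// Resolve the NodePort endpoint exactly as the test does.
		out, err := exec.Command("minikube", "-p", "functional-124593",
			"service", "hello-node-connect", "--url").Output()
		if err != nil {
			panic(err)
		}
		url := strings.TrimSpace(string(out))

		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)

		// The echoserver response starts with a Hostname: line naming the pod.
		fmt.Println("contains Hostname:", strings.Contains(string(body), "Hostname:"))
	}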

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (35.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6c390b22-3b82-4f13-8bd2-883299635128] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004707305s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-124593 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-124593 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-124593 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-124593 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-124593 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b91548ea-a975-4e04-8fd6-5d7432f47df9] Pending
helpers_test.go:344: "sp-pod" [b91548ea-a975-4e04-8fd6-5d7432f47df9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b91548ea-a975-4e04-8fd6-5d7432f47df9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.010386817s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-124593 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-124593 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-124593 delete -f testdata/storage-provisioner/pod.yaml: (1.703132004s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-124593 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5eeb6eab-df59-4e48-953f-94784b2f45ae] Pending
helpers_test.go:344: "sp-pod" [5eeb6eab-df59-4e48-953f-94784b2f45ae] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5eeb6eab-df59-4e48-953f-94784b2f45ae] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004167866s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-124593 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.25s)
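Note: PersistentVolumeClaim is a persistence round trip: bind a PVC, mount it into sp-pod, write /tmp/mount/foo, delete and recreate the pod against the same claim, and confirm the file survived. A sketch of the write-then-verify pair, shelling out to kubectl exec with the pod name and mount path from the log above.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubectlExec runs a command inside sp-pod in the functional-124593 context.
	func kubectlExec(args ...string) ([]byte, error) {
		full := append([]string{"--context", "functional-124593", "exec", "sp-pod", "--"}, args...)
		return exec.Command("kubectl", full...).CombinedOutput()
	}

	func main() {
		// Write a marker file onto the persistent volume...
		if out, err := kubectlExec("touch", "/tmp/mount/foo"); err != nil {
			fmt.Printf("touch failed: %v\n%s\n", err, out)
			return
		}
		// ...and after the pod has been deleted and re-created against the same
		// PVC, the file should still be listed here.
		out, err := kubectlExec("ls", "/tmp/mount")
		fmt.Printf("ls /tmp/mount: %s (err=%v)\n", out, err)
	}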

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh -n functional-124593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 cp functional-124593:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd156246205/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh -n functional-124593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh -n functional-124593 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.32s)
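Note: CpCmd copies testdata/cp-test.txt into the node, back out to a temp directory, and into a path that does not yet exist, checking each copy with ssh plus sudo cat. A sketch of one host-to-node round trip that compares the bytes read back over ssh with the local fixture; the paths and profile name are the ones used above.

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		src := "testdata/cp-test.txt" // same fixture the test uses

		// Copy host -> node.
		if out, err := exec.Command("minikube", "-p", "functional-124593",
			"cp", src, "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			fmt.Printf("cp into node failed: %v\n%s\n", err, out)
			return
		}
		// Read it back over ssh and compare with the local copy.
		remote, err := exec.Command("minikube", "-p", "functional-124593",
			"ssh", "-n", "functional-124593", "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			fmt.Println("ssh cat failed:", err)
			return
		}
		local, _ := os.ReadFile(src)
		fmt.Println("round trip matches:", bytes.Equal(bytes.TrimSpace(remote), bytes.TrimSpace(local)))
	}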

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-124593 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
2024/08/19 19:05:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "mysql-6cdb49bbb-v5qhb" [e62f31da-21c5-4645-90bf-3b3787861bcc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-v5qhb" [e62f31da-21c5-4645-90bf-3b3787861bcc] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.004719506s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-124593 exec mysql-6cdb49bbb-v5qhb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-124593 exec mysql-6cdb49bbb-v5qhb -- mysql -ppassword -e "show databases;": exit status 1 (178.256081ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-124593 exec mysql-6cdb49bbb-v5qhb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-124593 exec mysql-6cdb49bbb-v5qhb -- mysql -ppassword -e "show databases;": exit status 1 (136.21057ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-124593 exec mysql-6cdb49bbb-v5qhb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-124593 exec mysql-6cdb49bbb-v5qhb -- mysql -ppassword -e "show databases;": exit status 1 (144.164621ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-124593 exec mysql-6cdb49bbb-v5qhb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.44s)
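Note: the MySQL test deploys testdata/mysql.yaml and then polls mysql -ppassword -e "show databases;" inside the pod; the ERROR 1045 and ERROR 2002 exits above are expected while mysqld is still initializing, and the test simply retries until the query succeeds. A sketch of that retry loop with a fixed two-second backoff; the attempt count and backoff are arbitrary choices for the example, and the pod name is the one from this run.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		const pod = "mysql-6cdb49bbb-v5qhb" // pod name from this particular run

		for attempt := 1; attempt <= 20; attempt++ {
			out, err := exec.Command("kubectl", "--context", "functional-124593",
				"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("query succeeded on attempt %d:\n%s\n", attempt, out)
				return
			}
			// Access-denied / can't-connect errors just mean mysqld isn't ready yet.
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("mysql never became ready")
	}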

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/438159/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "sudo cat /etc/test/nested/copy/438159/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/438159.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "sudo cat /etc/ssl/certs/438159.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/438159.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "sudo cat /usr/share/ca-certificates/438159.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/4381592.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "sudo cat /etc/ssl/certs/4381592.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/4381592.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "sudo cat /usr/share/ca-certificates/4381592.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.38s)
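Note: CertSync checks that the user-supplied certs appear under /etc/ssl/certs and /usr/share/ca-certificates, plus hash-named links such as 51391683.0 and 3ec20f2e.0, which look like OpenSSL subject-hash names for the same files. A sketch of deriving that expected link name, assuming the openssl CLI is available inside the node; the hash-link interpretation is an inference from the paths above, not something this log states.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Print the subject hash of the synced test cert; /etc/ssl/certs/<hash>.0
		// is the link name CertSync would be expected to create for it (assumed).
		out, err := exec.Command("minikube", "-p", "functional-124593", "ssh",
			"sudo openssl x509 -hash -noout -in /etc/ssl/certs/438159.pem").Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		fmt.Printf("expect a link named /etc/ssl/certs/%s.0\n", hash)
	}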

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-124593 ssh "sudo systemctl is-active docker": exit status 1 (215.844014ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-124593 ssh "sudo systemctl is-active containerd": exit status 1 (221.116013ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
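Note: NonActiveRuntimeDisabled confirms that the runtimes not selected for this crio profile are stopped: systemctl is-active prints "inactive" and exits 3, which is why each ssh probe above reports a non-zero status even though the test passes. A sketch of reading both the printed state and the error for the two non-selected runtimes.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// crio is the configured runtime, so docker and containerd should be inactive.
		for _, unit := range []string{"docker", "containerd"} {
			out, err := exec.Command("minikube", "-p", "functional-124593",
				"ssh", "sudo systemctl is-active "+unit).Output()
			state := strings.TrimSpace(string(out))
			// "inactive" plus a non-zero exit is the expected combination here.
			fmt.Printf("%s: state=%q err=%v\n", unit, state, err)
		}
	}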

                                                
                                    
x
+
TestFunctional/parallel/License (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-124593 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-124593 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-124593 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-124593 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 448861: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-124593 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-124593 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e23cea6e-b5eb-44d1-a60a-0589faee104e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e23cea6e-b5eb-44d1-a60a-0589faee104e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004101782s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.27s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "259.579157ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "47.188702ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "262.790464ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.848722ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (13.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-124593 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-124593 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-qc986" [ddd55849-ad70-4d7f-be62-906aaf700473] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-qc986" [ddd55849-ad70-4d7f-be62-906aaf700473] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.004621504s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.22s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-124593 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
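Note: with minikube tunnel running in the background, the LoadBalancer service eventually receives an ingress IP, which the test reads with the jsonpath query above. A sketch of polling that same query until it returns a non-empty address; the 30-attempt, two-second cadence is an arbitrary choice for the example.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		const query = "jsonpath={.status.loadBalancer.ingress[0].ip}"
		for i := 0; i < 30; i++ {
			out, err := exec.Command("kubectl", "--context", "functional-124593",
				"get", "svc", "nginx-svc", "-o", query).Output()
			ip := strings.TrimSpace(string(out))
			if err == nil && ip != "" {
				fmt.Println("tunnel assigned ingress IP:", ip)
				return
			}
			time.Sleep(2 * time.Second) // tunnel has not programmed the address yet
		}
		fmt.Println("no ingress IP assigned")
	}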

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.106.199 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-124593 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-124593 /tmp/TestFunctionalparallelMountCmdany-port520646391/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724094290655195126" to /tmp/TestFunctionalparallelMountCmdany-port520646391/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724094290655195126" to /tmp/TestFunctionalparallelMountCmdany-port520646391/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724094290655195126" to /tmp/TestFunctionalparallelMountCmdany-port520646391/001/test-1724094290655195126
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-124593 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (283.649786ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 19:04 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 19:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 19:04 test-1724094290655195126
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh cat /mount-9p/test-1724094290655195126
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-124593 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [23f3f036-ffe3-4d51-982f-d0ad3229b7f6] Pending
helpers_test.go:344: "busybox-mount" [23f3f036-ffe3-4d51-982f-d0ad3229b7f6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [23f3f036-ffe3-4d51-982f-d0ad3229b7f6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0819 19:04:56.397711  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [23f3f036-ffe3-4d51-982f-d0ad3229b7f6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004190867s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-124593 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-124593 /tmp/TestFunctionalparallelMountCmdany-port520646391/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.67s)
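Note: the any-port mount test launches minikube mount as a background daemon and then probes for the 9p filesystem with findmnt -T /mount-9p; the first probe above fails only because the mount had not finished coming up, so the check is retried. A sketch of that polling step against the same mount point, with an assumed 30-second deadline.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			out, err := exec.Command("minikube", "-p", "functional-124593",
				"ssh", "findmnt -T /mount-9p").Output()
			if err == nil && strings.Contains(string(out), "9p") {
				fmt.Printf("mount is up:\n%s\n", out)
				return
			}
			// Not mounted yet (findmnt exits non-zero); try again shortly.
			time.Sleep(time.Second)
		}
		fmt.Println("gave up waiting for /mount-9p")
	}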

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 service list -o json
functional_test.go:1494: Took "440.225874ms" to run "out/minikube-linux-amd64 -p functional-124593 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.22:31001
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.22:31001
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-124593 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-124593
localhost/kicbase/echo-server:functional-124593
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-124593 image ls --format short --alsologtostderr:
I0819 19:05:08.347927  451563 out.go:345] Setting OutFile to fd 1 ...
I0819 19:05:08.348205  451563 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:05:08.348215  451563 out.go:358] Setting ErrFile to fd 2...
I0819 19:05:08.348219  451563 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:05:08.348418  451563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
I0819 19:05:08.349037  451563 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 19:05:08.349161  451563 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 19:05:08.349580  451563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 19:05:08.349639  451563 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 19:05:08.366699  451563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37495
I0819 19:05:08.367269  451563 main.go:141] libmachine: () Calling .GetVersion
I0819 19:05:08.367866  451563 main.go:141] libmachine: Using API Version  1
I0819 19:05:08.367889  451563 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 19:05:08.368219  451563 main.go:141] libmachine: () Calling .GetMachineName
I0819 19:05:08.368445  451563 main.go:141] libmachine: (functional-124593) Calling .GetState
I0819 19:05:08.370277  451563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 19:05:08.370326  451563 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 19:05:08.385924  451563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
I0819 19:05:08.386381  451563 main.go:141] libmachine: () Calling .GetVersion
I0819 19:05:08.386948  451563 main.go:141] libmachine: Using API Version  1
I0819 19:05:08.386966  451563 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 19:05:08.387429  451563 main.go:141] libmachine: () Calling .GetMachineName
I0819 19:05:08.387687  451563 main.go:141] libmachine: (functional-124593) Calling .DriverName
I0819 19:05:08.387932  451563 ssh_runner.go:195] Run: systemctl --version
I0819 19:05:08.387957  451563 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
I0819 19:05:08.390865  451563 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
I0819 19:05:08.391296  451563 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
I0819 19:05:08.391333  451563 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
I0819 19:05:08.391439  451563 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
I0819 19:05:08.391632  451563 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
I0819 19:05:08.391810  451563 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
I0819 19:05:08.391961  451563 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
I0819 19:05:08.475805  451563 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 19:05:08.515797  451563 main.go:141] libmachine: Making call to close driver server
I0819 19:05:08.515817  451563 main.go:141] libmachine: (functional-124593) Calling .Close
I0819 19:05:08.516091  451563 main.go:141] libmachine: Successfully made call to close driver server
I0819 19:05:08.516110  451563 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 19:05:08.516118  451563 main.go:141] libmachine: Making call to close driver server
I0819 19:05:08.516127  451563 main.go:141] libmachine: (functional-124593) Calling .Close
I0819 19:05:08.516334  451563 main.go:141] libmachine: Successfully made call to close driver server
I0819 19:05:08.516349  451563 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
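Note: the image-list variants all start from the same data: the stderr trace above shows the command shelling into the node and running sudo crictl images --output json, which is then rendered as short, table, json or yaml. A sketch of decoding that JSON into bare repo tags, assuming crictl's top-level "images" array with "repoTags" entries.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList is crictl's JSON listing reduced to the field the short format prints.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-124593",
			"ssh", "sudo crictl images --output json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag) // e.g. registry.k8s.io/pause:3.10
			}
		}
	}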

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-124593 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/minikube-local-cache-test     | functional-124593  | eea6c77373e59 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-124593  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| docker.io/library/nginx                 | alpine             | 0f0eda053dc5c | 44.7MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-124593 image ls --format table --alsologtostderr:
I0819 19:05:08.910292  451694 out.go:345] Setting OutFile to fd 1 ...
I0819 19:05:08.910577  451694 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:05:08.910588  451694 out.go:358] Setting ErrFile to fd 2...
I0819 19:05:08.910594  451694 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:05:08.910803  451694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
I0819 19:05:08.911460  451694 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 19:05:08.911586  451694 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 19:05:08.911953  451694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 19:05:08.912008  451694 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 19:05:08.928099  451694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
I0819 19:05:08.928603  451694 main.go:141] libmachine: () Calling .GetVersion
I0819 19:05:08.929223  451694 main.go:141] libmachine: Using API Version  1
I0819 19:05:08.929246  451694 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 19:05:08.929637  451694 main.go:141] libmachine: () Calling .GetMachineName
I0819 19:05:08.929866  451694 main.go:141] libmachine: (functional-124593) Calling .GetState
I0819 19:05:08.932030  451694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 19:05:08.932077  451694 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 19:05:08.947529  451694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43293
I0819 19:05:08.948021  451694 main.go:141] libmachine: () Calling .GetVersion
I0819 19:05:08.948531  451694 main.go:141] libmachine: Using API Version  1
I0819 19:05:08.948557  451694 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 19:05:08.948959  451694 main.go:141] libmachine: () Calling .GetMachineName
I0819 19:05:08.949200  451694 main.go:141] libmachine: (functional-124593) Calling .DriverName
I0819 19:05:08.949430  451694 ssh_runner.go:195] Run: systemctl --version
I0819 19:05:08.949459  451694 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
I0819 19:05:08.952419  451694 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
I0819 19:05:08.952803  451694 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
I0819 19:05:08.952828  451694 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
I0819 19:05:08.953013  451694 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
I0819 19:05:08.953237  451694 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
I0819 19:05:08.953425  451694 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
I0819 19:05:08.953631  451694 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
I0819 19:05:09.043530  451694 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 19:05:09.086364  451694 main.go:141] libmachine: Making call to close driver server
I0819 19:05:09.086382  451694 main.go:141] libmachine: (functional-124593) Calling .Close
I0819 19:05:09.086664  451694 main.go:141] libmachine: Successfully made call to close driver server
I0819 19:05:09.086682  451694 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 19:05:09.086692  451694 main.go:141] libmachine: Making call to close driver server
I0819 19:05:09.086700  451694 main.go:141] libmachine: (functional-124593) Calling .Close
I0819 19:05:09.086974  451694 main.go:141] libmachine: Successfully made call to close driver server
I0819 19:05:09.087043  451694 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 19:05:09.086994  451694 main.go:141] libmachine: (functional-124593) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
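The table above is read from the node's CRI-O image store; per the stderr log, the listing is produced by shelling into the VM and running crictl. The same inventory can be inspected by hand, for example:

# Read CRI-O's image store directly inside the functional-124593 guest
# (the same `sudo crictl images --output json` call shown in the stderr above).
out/minikube-linux-amd64 -p functional-124593 ssh "sudo crictl images --output json"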

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-124593 image ls --format json --alsologtostderr:
[{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aa
a069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":["docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0","docker.io/library/nginx@sha25
6:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44668625"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787
b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"eea6c77373e5935a0d5c653c1ed54ca2ad52952dc1da3ab0919b194f8f292edc","repoDigests":["localhost/minikube-local-cache-test@sha256:64fabb98078c4760883ebdc24e5e1cfe1278ef827c2d6614bd130cf7621cd028"],"repoTags":["localhost/minikube-local-cache-test:functional-124593"],"size":"3330"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","r
epoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef12
0ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-124593"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287
463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-124593 image ls --format json --alsologtostderr:
I0819 19:05:08.687726  451646 out.go:345] Setting OutFile to fd 1 ...
I0819 19:05:08.688048  451646 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:05:08.688060  451646 out.go:358] Setting ErrFile to fd 2...
I0819 19:05:08.688065  451646 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:05:08.688236  451646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
I0819 19:05:08.688870  451646 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 19:05:08.688977  451646 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 19:05:08.689467  451646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 19:05:08.689517  451646 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 19:05:08.704994  451646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40875
I0819 19:05:08.705545  451646 main.go:141] libmachine: () Calling .GetVersion
I0819 19:05:08.706246  451646 main.go:141] libmachine: Using API Version  1
I0819 19:05:08.706276  451646 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 19:05:08.706608  451646 main.go:141] libmachine: () Calling .GetMachineName
I0819 19:05:08.706808  451646 main.go:141] libmachine: (functional-124593) Calling .GetState
I0819 19:05:08.708688  451646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 19:05:08.708731  451646 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 19:05:08.724968  451646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35557
I0819 19:05:08.725546  451646 main.go:141] libmachine: () Calling .GetVersion
I0819 19:05:08.726083  451646 main.go:141] libmachine: Using API Version  1
I0819 19:05:08.726105  451646 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 19:05:08.726431  451646 main.go:141] libmachine: () Calling .GetMachineName
I0819 19:05:08.726611  451646 main.go:141] libmachine: (functional-124593) Calling .DriverName
I0819 19:05:08.726862  451646 ssh_runner.go:195] Run: systemctl --version
I0819 19:05:08.726893  451646 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
I0819 19:05:08.730056  451646 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
I0819 19:05:08.730578  451646 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
I0819 19:05:08.730614  451646 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
I0819 19:05:08.730767  451646 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
I0819 19:05:08.730982  451646 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
I0819 19:05:08.731155  451646 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
I0819 19:05:08.731331  451646 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
I0819 19:05:08.812475  451646 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 19:05:08.856598  451646 main.go:141] libmachine: Making call to close driver server
I0819 19:05:08.856611  451646 main.go:141] libmachine: (functional-124593) Calling .Close
I0819 19:05:08.856923  451646 main.go:141] libmachine: Successfully made call to close driver server
I0819 19:05:08.856942  451646 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 19:05:08.856957  451646 main.go:141] libmachine: Making call to close driver server
I0819 19:05:08.856960  451646 main.go:141] libmachine: (functional-124593) DBG | Closing plugin on server side
I0819 19:05:08.856966  451646 main.go:141] libmachine: (functional-124593) Calling .Close
I0819 19:05:08.857244  451646 main.go:141] libmachine: Successfully made call to close driver server
I0819 19:05:08.857267  451646 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 19:05:08.857276  451646 main.go:141] libmachine: (functional-124593) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
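The JSON form is the most convenient one to post-process. A minimal sketch, assuming jq is installed on the host, that prints only the tagged references from the listing above:

# Extract repo:tag references from `image ls --format json`, skipping
# digest-only entries whose repoTags array is empty. Assumes jq is available.
out/minikube-linux-amd64 -p functional-124593 image ls --format json \
  | jq -r '.[].repoTags[]?' | sort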

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-124593 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-124593
size: "4943877"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: eea6c77373e5935a0d5c653c1ed54ca2ad52952dc1da3ab0919b194f8f292edc
repoDigests:
- localhost/minikube-local-cache-test@sha256:64fabb98078c4760883ebdc24e5e1cfe1278ef827c2d6614bd130cf7621cd028
repoTags:
- localhost/minikube-local-cache-test:functional-124593
size: "3330"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests:
- docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "44668625"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-124593 image ls --format yaml --alsologtostderr:
I0819 19:05:08.458216  451592 out.go:345] Setting OutFile to fd 1 ...
I0819 19:05:08.458342  451592 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:05:08.458350  451592 out.go:358] Setting ErrFile to fd 2...
I0819 19:05:08.458355  451592 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:05:08.458550  451592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
I0819 19:05:08.459121  451592 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 19:05:08.459219  451592 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 19:05:08.459578  451592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 19:05:08.459624  451592 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 19:05:08.475298  451592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39809
I0819 19:05:08.475867  451592 main.go:141] libmachine: () Calling .GetVersion
I0819 19:05:08.476531  451592 main.go:141] libmachine: Using API Version  1
I0819 19:05:08.476559  451592 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 19:05:08.477020  451592 main.go:141] libmachine: () Calling .GetMachineName
I0819 19:05:08.477257  451592 main.go:141] libmachine: (functional-124593) Calling .GetState
I0819 19:05:08.479277  451592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 19:05:08.479322  451592 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 19:05:08.494933  451592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
I0819 19:05:08.495390  451592 main.go:141] libmachine: () Calling .GetVersion
I0819 19:05:08.495933  451592 main.go:141] libmachine: Using API Version  1
I0819 19:05:08.495973  451592 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 19:05:08.496380  451592 main.go:141] libmachine: () Calling .GetMachineName
I0819 19:05:08.496655  451592 main.go:141] libmachine: (functional-124593) Calling .DriverName
I0819 19:05:08.496878  451592 ssh_runner.go:195] Run: systemctl --version
I0819 19:05:08.496915  451592 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
I0819 19:05:08.499926  451592 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
I0819 19:05:08.500349  451592 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
I0819 19:05:08.500398  451592 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
I0819 19:05:08.500589  451592 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
I0819 19:05:08.500797  451592 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
I0819 19:05:08.501004  451592 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
I0819 19:05:08.501209  451592 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
I0819 19:05:08.595826  451592 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 19:05:08.637712  451592 main.go:141] libmachine: Making call to close driver server
I0819 19:05:08.637733  451592 main.go:141] libmachine: (functional-124593) Calling .Close
I0819 19:05:08.638050  451592 main.go:141] libmachine: Successfully made call to close driver server
I0819 19:05:08.638070  451592 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 19:05:08.638087  451592 main.go:141] libmachine: Making call to close driver server
I0819 19:05:08.638096  451592 main.go:141] libmachine: (functional-124593) Calling .Close
I0819 19:05:08.638323  451592 main.go:141] libmachine: Successfully made call to close driver server
I0819 19:05:08.638342  451592 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-124593 ssh pgrep buildkitd: exit status 1 (205.680557ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image build -t localhost/my-image:functional-124593 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-124593 image build -t localhost/my-image:functional-124593 testdata/build --alsologtostderr: (2.811420041s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-124593 image build -t localhost/my-image:functional-124593 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d091eee3259
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-124593
--> 3148bd41c13
Successfully tagged localhost/my-image:functional-124593
3148bd41c13555b56ee57ca1b633beb2751f4e6ac94c06e9c955abd741032e21
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-124593 image build -t localhost/my-image:functional-124593 testdata/build --alsologtostderr:
I0819 19:05:08.771848  451670 out.go:345] Setting OutFile to fd 1 ...
I0819 19:05:08.772038  451670 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:05:08.772052  451670 out.go:358] Setting ErrFile to fd 2...
I0819 19:05:08.772059  451670 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 19:05:08.772247  451670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
I0819 19:05:08.772883  451670 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 19:05:08.773462  451670 config.go:182] Loaded profile config "functional-124593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 19:05:08.774518  451670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 19:05:08.774582  451670 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 19:05:08.790916  451670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34363
I0819 19:05:08.791476  451670 main.go:141] libmachine: () Calling .GetVersion
I0819 19:05:08.792072  451670 main.go:141] libmachine: Using API Version  1
I0819 19:05:08.792099  451670 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 19:05:08.792484  451670 main.go:141] libmachine: () Calling .GetMachineName
I0819 19:05:08.792757  451670 main.go:141] libmachine: (functional-124593) Calling .GetState
I0819 19:05:08.794580  451670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 19:05:08.794612  451670 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 19:05:08.810406  451670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34271
I0819 19:05:08.810859  451670 main.go:141] libmachine: () Calling .GetVersion
I0819 19:05:08.811394  451670 main.go:141] libmachine: Using API Version  1
I0819 19:05:08.811420  451670 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 19:05:08.811774  451670 main.go:141] libmachine: () Calling .GetMachineName
I0819 19:05:08.811954  451670 main.go:141] libmachine: (functional-124593) Calling .DriverName
I0819 19:05:08.812194  451670 ssh_runner.go:195] Run: systemctl --version
I0819 19:05:08.812227  451670 main.go:141] libmachine: (functional-124593) Calling .GetSSHHostname
I0819 19:05:08.815622  451670 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined MAC address 52:54:00:8e:23:13 in network mk-functional-124593
I0819 19:05:08.816025  451670 main.go:141] libmachine: (functional-124593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:23:13", ip: ""} in network mk-functional-124593: {Iface:virbr1 ExpiryTime:2024-08-19 19:49:16 +0000 UTC Type:0 Mac:52:54:00:8e:23:13 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-124593 Clientid:01:52:54:00:8e:23:13}
I0819 19:05:08.816049  451670 main.go:141] libmachine: (functional-124593) DBG | domain functional-124593 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:23:13 in network mk-functional-124593
I0819 19:05:08.816236  451670 main.go:141] libmachine: (functional-124593) Calling .GetSSHPort
I0819 19:05:08.816424  451670 main.go:141] libmachine: (functional-124593) Calling .GetSSHKeyPath
I0819 19:05:08.816611  451670 main.go:141] libmachine: (functional-124593) Calling .GetSSHUsername
I0819 19:05:08.816754  451670 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/functional-124593/id_rsa Username:docker}
I0819 19:05:08.906714  451670 build_images.go:161] Building image from path: /tmp/build.1635554531.tar
I0819 19:05:08.906783  451670 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 19:05:08.917242  451670 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1635554531.tar
I0819 19:05:08.921605  451670 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1635554531.tar: stat -c "%s %y" /var/lib/minikube/build/build.1635554531.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1635554531.tar': No such file or directory
I0819 19:05:08.921645  451670 ssh_runner.go:362] scp /tmp/build.1635554531.tar --> /var/lib/minikube/build/build.1635554531.tar (3072 bytes)
I0819 19:05:08.949652  451670 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1635554531
I0819 19:05:08.964595  451670 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1635554531 -xf /var/lib/minikube/build/build.1635554531.tar
I0819 19:05:08.974880  451670 crio.go:315] Building image: /var/lib/minikube/build/build.1635554531
I0819 19:05:08.974966  451670 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-124593 /var/lib/minikube/build/build.1635554531 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0819 19:05:11.474605  451670 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-124593 /var/lib/minikube/build/build.1635554531 --cgroup-manager=cgroupfs: (2.499594819s)
I0819 19:05:11.474706  451670 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1635554531
I0819 19:05:11.501355  451670 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1635554531.tar
I0819 19:05:11.533798  451670 build_images.go:217] Built localhost/my-image:functional-124593 from /tmp/build.1635554531.tar
I0819 19:05:11.533849  451670 build_images.go:133] succeeded building to: functional-124593
I0819 19:05:11.533857  451670 build_images.go:134] failed building to: 
I0819 19:05:11.533886  451670 main.go:141] libmachine: Making call to close driver server
I0819 19:05:11.533898  451670 main.go:141] libmachine: (functional-124593) Calling .Close
I0819 19:05:11.534225  451670 main.go:141] libmachine: Successfully made call to close driver server
I0819 19:05:11.534249  451670 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 19:05:11.534258  451670 main.go:141] libmachine: Making call to close driver server
I0819 19:05:11.534266  451670 main.go:141] libmachine: (functional-124593) Calling .Close
I0819 19:05:11.534534  451670 main.go:141] libmachine: Successfully made call to close driver server
I0819 19:05:11.534549  451670 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 19:05:11.534551  451670 main.go:141] libmachine: (functional-124593) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.41s)
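The STEP 1/3 .. 3/3 lines in the stdout above reflect the build context under testdata/build. A hypothetical reconstruction of that context and the build call (the Dockerfile name and the content.txt contents are assumptions inferred from the STEP output, not taken from the repository):

# Rebuild an equivalent context and run the same `image build` flow by hand.
mkdir -p /tmp/build-demo && cd /tmp/build-demo
echo "placeholder" > content.txt          # real contents unknown; only the file name appears in the log
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-124593 image build -t localhost/my-image:functional-124593 /tmp/build-demo --alsologtostderr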

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.550810145s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-124593
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image load --daemon kicbase/echo-server:functional-124593 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-124593 image load --daemon kicbase/echo-server:functional-124593 --alsologtostderr: (1.17179241s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-124593 /tmp/TestFunctionalparallelMountCmdspecific-port1424652350/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-124593 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (207.075045ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-124593 /tmp/TestFunctionalparallelMountCmdspecific-port1424652350/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-124593 /tmp/TestFunctionalparallelMountCmdspecific-port1424652350/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)
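Outside the test harness, the 9p flow exercised here can be reproduced with the same flags; a sketch (the host directory /tmp/mount-src is an arbitrary assumption):

# Start the 9p mount helper on the fixed port used by the test, verify it from
# inside the guest, then force-unmount as the cleanup step does.
mkdir -p /tmp/mount-src
out/minikube-linux-amd64 mount -p functional-124593 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 --port 46464 &
out/minikube-linux-amd64 -p functional-124593 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-124593 ssh "sudo umount -f /mount-9p"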

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image load --daemon kicbase/echo-server:functional-124593 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-124593
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image load --daemon kicbase/echo-server:functional-124593 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-124593 image load --daemon kicbase/echo-server:functional-124593 --alsologtostderr: (3.387892678s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.53s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-124593 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1426555090/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-124593 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1426555090/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-124593 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1426555090/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-124593 ssh "findmnt -T" /mount1: exit status 1 (330.203312ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-124593 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-124593 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1426555090/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-124593 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1426555090/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-124593 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1426555090/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.32s)
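The cleanup path above hinges on a single command that stops every mount helper started for the profile:

# Kill all running `minikube mount` helper processes for this profile in one shot,
# as VerifyCleanup does after checking /mount1, /mount2 and /mount3.
out/minikube-linux-amd64 mount -p functional-124593 --kill=true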

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image save kicbase/echo-server:functional-124593 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image rm kicbase/echo-server:functional-124593 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.97s)
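Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile amount to a save/remove/reload round trip; condensed from the commands above:

# Save the image to a tarball, remove it from the runtime, then reload it from the file.
out/minikube-linux-amd64 -p functional-124593 image save kicbase/echo-server:functional-124593 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-124593 image rm kicbase/echo-server:functional-124593 --alsologtostderr
out/minikube-linux-amd64 -p functional-124593 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-124593 image ls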

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-124593
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-124593 image save --daemon kicbase/echo-server:functional-124593 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-124593
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.94s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-124593
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-124593
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-124593
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (199.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-163902 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 19:06:19.468173  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-163902 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m18.577770536s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (199.23s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-163902 -- rollout status deployment/busybox: (3.33461124s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-4hqxq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-9zj57 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-vlrsr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-4hqxq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-9zj57 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-vlrsr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-4hqxq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-9zj57 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-vlrsr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.56s)
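
The DeployApp steps above exercise in-cluster DNS by running nslookup for kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local inside each busybox pod. A minimal Go sketch of one such probe (illustrative; it shells out to plain kubectl with the cluster context rather than the minikube kubectl wrapper used in the log, and the pod name is copied from above):

package main

import (
	"fmt"
	"os/exec"
)

// dnsOK runs nslookup for a name inside a pod, mirroring the checks above;
// a zero exit status means the name resolved.
func dnsOK(kubectlContext, pod, name string) bool {
	return exec.Command("kubectl", "--context", kubectlContext,
		"exec", pod, "--", "nslookup", name).Run() == nil
}

func main() {
	// Context and pod names are taken from the log above.
	for _, name := range []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"} {
		fmt.Println(name, "resolves:", dnsOK("ha-163902", "busybox-7dff88458-4hqxq", name))
	}
}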

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-4hqxq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-4hqxq -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-9zj57 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-9zj57 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-vlrsr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-163902 -- exec busybox-7dff88458-vlrsr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.21s)
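
The shell pipeline above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) picks the fifth line of busybox's nslookup output and takes its third space-separated field, i.e. the resolved host IP, which the next step then pings from inside the pod. A minimal Go sketch of the same extraction driven through kubectl exec (illustrative only; the pod and context names are copied from the log, and the helper is not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// resolveHostIP runs the same nslookup/awk/cut pipeline inside a busybox pod
// and returns the extracted address. Error handling is kept minimal.
func resolveHostIP(kubectlContext, pod string) (string, error) {
	cmd := exec.Command("kubectl", "--context", kubectlContext, "exec", pod, "--",
		"sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out, err := cmd.Output()
	if err != nil {
		return "", fmt.Errorf("exec failed: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := resolveHostIP("ha-163902", "busybox-7dff88458-4hqxq")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// The test then pings this address from the pod with `ping -c 1 <ip>`.
	fmt.Println("host.minikube.internal resolves to", ip)
}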

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (55.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-163902 -v=7 --alsologtostderr
E0819 19:09:38.961914  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:38.968329  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:38.979813  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:39.001308  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:39.042755  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:39.124274  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:39.285881  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:39.607791  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:40.249665  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:41.531222  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:44.093469  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:49.215788  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-163902 -v=7 --alsologtostderr: (55.105766568s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.92s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-163902 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp testdata/cp-test.txt ha-163902:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4137015802/001/cp-test_ha-163902.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902:/home/docker/cp-test.txt ha-163902-m02:/home/docker/cp-test_ha-163902_ha-163902-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m02 "sudo cat /home/docker/cp-test_ha-163902_ha-163902-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902:/home/docker/cp-test.txt ha-163902-m03:/home/docker/cp-test_ha-163902_ha-163902-m03.txt
E0819 19:09:56.397466  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m03 "sudo cat /home/docker/cp-test_ha-163902_ha-163902-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902:/home/docker/cp-test.txt ha-163902-m04:/home/docker/cp-test_ha-163902_ha-163902-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m04 "sudo cat /home/docker/cp-test_ha-163902_ha-163902-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp testdata/cp-test.txt ha-163902-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4137015802/001/cp-test_ha-163902-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902-m02:/home/docker/cp-test.txt ha-163902:/home/docker/cp-test_ha-163902-m02_ha-163902.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902 "sudo cat /home/docker/cp-test_ha-163902-m02_ha-163902.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902-m02:/home/docker/cp-test.txt ha-163902-m03:/home/docker/cp-test_ha-163902-m02_ha-163902-m03.txt
E0819 19:09:59.457712  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m03 "sudo cat /home/docker/cp-test_ha-163902-m02_ha-163902-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902-m02:/home/docker/cp-test.txt ha-163902-m04:/home/docker/cp-test_ha-163902-m02_ha-163902-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m04 "sudo cat /home/docker/cp-test_ha-163902-m02_ha-163902-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp testdata/cp-test.txt ha-163902-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4137015802/001/cp-test_ha-163902-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902-m03:/home/docker/cp-test.txt ha-163902:/home/docker/cp-test_ha-163902-m03_ha-163902.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902 "sudo cat /home/docker/cp-test_ha-163902-m03_ha-163902.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902-m03:/home/docker/cp-test.txt ha-163902-m02:/home/docker/cp-test_ha-163902-m03_ha-163902-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m02 "sudo cat /home/docker/cp-test_ha-163902-m03_ha-163902-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902-m03:/home/docker/cp-test.txt ha-163902-m04:/home/docker/cp-test_ha-163902-m03_ha-163902-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m04 "sudo cat /home/docker/cp-test_ha-163902-m03_ha-163902-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp testdata/cp-test.txt ha-163902-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4137015802/001/cp-test_ha-163902-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt ha-163902:/home/docker/cp-test_ha-163902-m04_ha-163902.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902 "sudo cat /home/docker/cp-test_ha-163902-m04_ha-163902.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt ha-163902-m02:/home/docker/cp-test_ha-163902-m04_ha-163902-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m02 "sudo cat /home/docker/cp-test_ha-163902-m04_ha-163902-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 cp ha-163902-m04:/home/docker/cp-test.txt ha-163902-m03:/home/docker/cp-test_ha-163902-m04_ha-163902-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 ssh -n ha-163902-m03 "sudo cat /home/docker/cp-test_ha-163902-m04_ha-163902-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.86s)
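
Every CopyFile step above follows one pattern: `minikube cp` a file onto a node (or between nodes), then `minikube ssh -n <node> "sudo cat ..."` to confirm the bytes arrived. A minimal Go sketch of a single round trip (illustrative; profile, node, and paths are copied from the log, and the binary path matches the one used above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// copyAndVerify reproduces one round trip from the log: copy a local file onto a
// node with `minikube cp`, read it back over `minikube ssh`, and compare the two.
func copyAndVerify(profile, node, local, remote string) error {
	want, err := os.ReadFile(local)
	if err != nil {
		return fmt.Errorf("read local file: %w", err)
	}
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"cp", local, node+":"+remote).CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		return fmt.Errorf("ssh cat failed: %w", err)
	}
	if strings.TrimSpace(string(got)) != strings.TrimSpace(string(want)) {
		return fmt.Errorf("content mismatch on %s:%s", node, remote)
	}
	return nil
}

func main() {
	// Profile, node, and paths are taken from the log above.
	err := copyAndVerify("ha-163902", "ha-163902-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Println("copy/verify:", err)
}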

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.483156823s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-163902 node delete m03 -v=7 --alsologtostderr: (16.015344361s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.77s)
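
The go-template passed to `kubectl get nodes -o go-template` above walks .items[].status.conditions[] and prints the status of each node's Ready condition, so a healthy cluster yields only "True" lines. A minimal Go sketch of an equivalent check that decodes `kubectl get nodes -o json` instead (illustrative, not the test's own helper; only the fields the template touches are modelled):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeList mirrors just the fields the go-template reads:
// .items[].status.conditions[].{type,status}, plus the node name for context.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// Equivalent of the template check: print the Ready condition for each node.
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}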

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (460.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-163902 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 19:22:59.470291  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:24:38.962085  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:24:56.397696  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:26:02.025843  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:29:38.961150  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:29:56.398043  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-163902 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m39.469783812s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (460.22s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (73.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-163902 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-163902 --control-plane -v=7 --alsologtostderr: (1m12.611801981s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-163902 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
x
+
TestJSONOutput/start/Command (56.08s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-519089 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-519089 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (56.077663543s)
--- PASS: TestJSONOutput/start/Command (56.08s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
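
The Distinct/IncreasingCurrentSteps subtests finish in 0.00s because they only inspect the JSON events already recorded by the start command; judging by their names, they validate the data.currentstep counters on step events (the field is visible in the TestErrorJSONOutput stdout further down). A hedged Go sketch of such a check over a stream of --output=json lines (illustrative; this is not the test's own implementation):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

// stepEvent models only the fields needed to check step ordering.
type stepEvent struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
	} `json:"data"`
}

func main() {
	// Read minikube's --output=json lines from stdin and require that the
	// currentstep values of step events strictly increase.
	prev := -1
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev stepEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // skip non-JSON or non-step lines
		}
		cur, err := strconv.Atoi(ev.Data.CurrentStep)
		if err != nil {
			continue
		}
		if cur <= prev {
			fmt.Printf("step order violation: %d after %d\n", cur, prev)
			return
		}
		prev = cur
	}
	fmt.Println("currentstep values strictly increase")
}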

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-519089 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-519089 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-519089 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-519089 --output=json --user=testUser: (7.352367681s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-503771 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-503771 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.852071ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a53ecca3-33b1-437a-9cb2-1504c0d91b50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-503771] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"48b0f59c-e003-40f3-a97c-3de35e7ebf71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"9fc6ee53-9e6f-46d0-8a03-6e0dcd5d9127","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2517bb85-e9a8-4bea-9d1a-a1b4689a2aca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig"}}
	{"specversion":"1.0","id":"ccebea9d-7eb8-4099-b3c2-cd7fac64e334","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube"}}
	{"specversion":"1.0","id":"58be675b-d698-4fd1-b657-00f4c85181e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1f86989a-34f5-4214-8a2b-efd4818cfa9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"92e83115-05f3-4fa7-aa10-c7ddf660bb31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-503771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-503771
--- PASS: TestErrorJSONOutput (0.20s)
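
Each line emitted with --output=json above is a CloudEvents 1.0 envelope whose type distinguishes step, info, and error events and whose payload sits under data. A minimal Go sketch that decodes the DRV_UNSUPPORTED_OS error line captured above (illustrative; the struct models only the fields visible in this output):

package main

import (
	"encoding/json"
	"fmt"
)

// event models the CloudEvents envelope seen in the stdout above; only the
// fields used here are declared, and data values are all strings in that output.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// One line taken verbatim from the captured stdout above.
	line := `{"specversion":"1.0","id":"92e83115-05f3-4fa7-aa10-c7ddf660bb31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
	}
}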

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (86.92s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-662425 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-662425 --driver=kvm2  --container-runtime=crio: (40.411807569s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-665630 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-665630 --driver=kvm2  --container-runtime=crio: (43.575418964s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-662425
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-665630
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-665630" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-665630
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-665630: (1.049143194s)
helpers_test.go:175: Cleaning up "first-662425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-662425
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-662425: (1.004673173s)
--- PASS: TestMinikubeProfile (86.92s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (28.67s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-280766 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0819 19:34:38.962304  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:34:56.398461  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-280766 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.670895446s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-280766 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-280766 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
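
VerifyMountFirst checks that the host directory is visible at /minikube-host and that a 9p filesystem is mounted, via `minikube ssh -- mount | grep 9p`. A minimal Go sketch of the same probe (illustrative; the profile name and binary path are copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List mounts inside the guest and look for a 9p entry, as the test does
	// with `minikube ssh -- mount | grep 9p`.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-280766",
		"ssh", "--", "mount").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "9p") {
			fmt.Println("9p mount found:", line)
			return
		}
	}
	fmt.Println("no 9p mount present")
}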

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (24.13s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-295688 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-295688 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.127685925s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.13s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-295688 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-295688 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.91s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-280766 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.91s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-295688 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-295688 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-295688
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-295688: (1.280927698s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.11s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-295688
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-295688: (22.11162038s)
--- PASS: TestMountStart/serial/RestartStopped (23.11s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-295688 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-295688 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (110.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-548379 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-548379 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.287174824s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.71s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-548379 -- rollout status deployment/busybox: (3.712374172s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- exec busybox-7dff88458-bzhsh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- exec busybox-7dff88458-h4978 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- exec busybox-7dff88458-bzhsh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- exec busybox-7dff88458-h4978 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- exec busybox-7dff88458-bzhsh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- exec busybox-7dff88458-h4978 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.26s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- exec busybox-7dff88458-bzhsh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- exec busybox-7dff88458-bzhsh -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- exec busybox-7dff88458-h4978 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548379 -- exec busybox-7dff88458-h4978 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (46.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-548379 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-548379 -v 3 --alsologtostderr: (45.495572022s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.06s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-548379 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 cp testdata/cp-test.txt multinode-548379:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 cp multinode-548379:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3755015912/001/cp-test_multinode-548379.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 cp multinode-548379:/home/docker/cp-test.txt multinode-548379-m02:/home/docker/cp-test_multinode-548379_multinode-548379-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379-m02 "sudo cat /home/docker/cp-test_multinode-548379_multinode-548379-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 cp multinode-548379:/home/docker/cp-test.txt multinode-548379-m03:/home/docker/cp-test_multinode-548379_multinode-548379-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379-m03 "sudo cat /home/docker/cp-test_multinode-548379_multinode-548379-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 cp testdata/cp-test.txt multinode-548379-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 cp multinode-548379-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3755015912/001/cp-test_multinode-548379-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 cp multinode-548379-m02:/home/docker/cp-test.txt multinode-548379:/home/docker/cp-test_multinode-548379-m02_multinode-548379.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379 "sudo cat /home/docker/cp-test_multinode-548379-m02_multinode-548379.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 cp multinode-548379-m02:/home/docker/cp-test.txt multinode-548379-m03:/home/docker/cp-test_multinode-548379-m02_multinode-548379-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379-m03 "sudo cat /home/docker/cp-test_multinode-548379-m02_multinode-548379-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 cp testdata/cp-test.txt multinode-548379-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 cp multinode-548379-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3755015912/001/cp-test_multinode-548379-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 cp multinode-548379-m03:/home/docker/cp-test.txt multinode-548379:/home/docker/cp-test_multinode-548379-m03_multinode-548379.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379 "sudo cat /home/docker/cp-test_multinode-548379-m03_multinode-548379.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 cp multinode-548379-m03:/home/docker/cp-test.txt multinode-548379-m02:/home/docker/cp-test_multinode-548379-m03_multinode-548379-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 ssh -n multinode-548379-m02 "sudo cat /home/docker/cp-test_multinode-548379-m03_multinode-548379-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.37s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-548379 node stop m03: (1.390202423s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-548379 status: exit status 7 (440.706978ms)

                                                
                                                
-- stdout --
	multinode-548379
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-548379-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-548379-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-548379 status --alsologtostderr: exit status 7 (443.234533ms)

                                                
                                                
-- stdout --
	multinode-548379
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-548379-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-548379-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:38:42.096173  470079 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:38:42.096314  470079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:38:42.096326  470079 out.go:358] Setting ErrFile to fd 2...
	I0819 19:38:42.096332  470079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:38:42.096542  470079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-430949/.minikube/bin
	I0819 19:38:42.096720  470079 out.go:352] Setting JSON to false
	I0819 19:38:42.096747  470079 mustload.go:65] Loading cluster: multinode-548379
	I0819 19:38:42.096862  470079 notify.go:220] Checking for updates...
	I0819 19:38:42.097250  470079 config.go:182] Loaded profile config "multinode-548379": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:38:42.097273  470079 status.go:255] checking status of multinode-548379 ...
	I0819 19:38:42.097867  470079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:38:42.097917  470079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:38:42.119049  470079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I0819 19:38:42.119547  470079 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:38:42.120262  470079 main.go:141] libmachine: Using API Version  1
	I0819 19:38:42.120291  470079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:38:42.120727  470079 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:38:42.120970  470079 main.go:141] libmachine: (multinode-548379) Calling .GetState
	I0819 19:38:42.122766  470079 status.go:330] multinode-548379 host status = "Running" (err=<nil>)
	I0819 19:38:42.122788  470079 host.go:66] Checking if "multinode-548379" exists ...
	I0819 19:38:42.123089  470079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:38:42.123125  470079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:38:42.139499  470079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36273
	I0819 19:38:42.139979  470079 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:38:42.140430  470079 main.go:141] libmachine: Using API Version  1
	I0819 19:38:42.140458  470079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:38:42.140888  470079 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:38:42.141166  470079 main.go:141] libmachine: (multinode-548379) Calling .GetIP
	I0819 19:38:42.144605  470079 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:38:42.145042  470079 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:38:42.145078  470079 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:38:42.145215  470079 host.go:66] Checking if "multinode-548379" exists ...
	I0819 19:38:42.145565  470079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:38:42.145606  470079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:38:42.162647  470079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0819 19:38:42.163167  470079 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:38:42.163696  470079 main.go:141] libmachine: Using API Version  1
	I0819 19:38:42.163732  470079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:38:42.164118  470079 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:38:42.164323  470079 main.go:141] libmachine: (multinode-548379) Calling .DriverName
	I0819 19:38:42.164581  470079 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:38:42.164607  470079 main.go:141] libmachine: (multinode-548379) Calling .GetSSHHostname
	I0819 19:38:42.167596  470079 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:38:42.167994  470079 main.go:141] libmachine: (multinode-548379) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:00:97", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:36:03 +0000 UTC Type:0 Mac:52:54:00:20:00:97 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-548379 Clientid:01:52:54:00:20:00:97}
	I0819 19:38:42.168029  470079 main.go:141] libmachine: (multinode-548379) DBG | domain multinode-548379 has defined IP address 192.168.39.35 and MAC address 52:54:00:20:00:97 in network mk-multinode-548379
	I0819 19:38:42.168266  470079 main.go:141] libmachine: (multinode-548379) Calling .GetSSHPort
	I0819 19:38:42.168509  470079 main.go:141] libmachine: (multinode-548379) Calling .GetSSHKeyPath
	I0819 19:38:42.168701  470079 main.go:141] libmachine: (multinode-548379) Calling .GetSSHUsername
	I0819 19:38:42.168849  470079 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/multinode-548379/id_rsa Username:docker}
	I0819 19:38:42.252885  470079 ssh_runner.go:195] Run: systemctl --version
	I0819 19:38:42.259774  470079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:38:42.275522  470079 kubeconfig.go:125] found "multinode-548379" server: "https://192.168.39.35:8443"
	I0819 19:38:42.275557  470079 api_server.go:166] Checking apiserver status ...
	I0819 19:38:42.275591  470079 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:38:42.290255  470079 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1075/cgroup
	W0819 19:38:42.300797  470079 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1075/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:38:42.300857  470079 ssh_runner.go:195] Run: ls
	I0819 19:38:42.305659  470079 api_server.go:253] Checking apiserver healthz at https://192.168.39.35:8443/healthz ...
	I0819 19:38:42.310421  470079 api_server.go:279] https://192.168.39.35:8443/healthz returned 200:
	ok
	I0819 19:38:42.310455  470079 status.go:422] multinode-548379 apiserver status = Running (err=<nil>)
	I0819 19:38:42.310465  470079 status.go:257] multinode-548379 status: &{Name:multinode-548379 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:38:42.310484  470079 status.go:255] checking status of multinode-548379-m02 ...
	I0819 19:38:42.310925  470079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:38:42.310960  470079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:38:42.327322  470079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39205
	I0819 19:38:42.327857  470079 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:38:42.328456  470079 main.go:141] libmachine: Using API Version  1
	I0819 19:38:42.328492  470079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:38:42.328837  470079 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:38:42.329039  470079 main.go:141] libmachine: (multinode-548379-m02) Calling .GetState
	I0819 19:38:42.331052  470079 status.go:330] multinode-548379-m02 host status = "Running" (err=<nil>)
	I0819 19:38:42.331080  470079 host.go:66] Checking if "multinode-548379-m02" exists ...
	I0819 19:38:42.331386  470079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:38:42.331420  470079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:38:42.347140  470079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I0819 19:38:42.347629  470079 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:38:42.348121  470079 main.go:141] libmachine: Using API Version  1
	I0819 19:38:42.348143  470079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:38:42.348472  470079 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:38:42.348699  470079 main.go:141] libmachine: (multinode-548379-m02) Calling .GetIP
	I0819 19:38:42.351327  470079 main.go:141] libmachine: (multinode-548379-m02) DBG | domain multinode-548379-m02 has defined MAC address 52:54:00:8a:db:d2 in network mk-multinode-548379
	I0819 19:38:42.351847  470079 main.go:141] libmachine: (multinode-548379-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:db:d2", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:37:07 +0000 UTC Type:0 Mac:52:54:00:8a:db:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-548379-m02 Clientid:01:52:54:00:8a:db:d2}
	I0819 19:38:42.351870  470079 main.go:141] libmachine: (multinode-548379-m02) DBG | domain multinode-548379-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:8a:db:d2 in network mk-multinode-548379
	I0819 19:38:42.352005  470079 host.go:66] Checking if "multinode-548379-m02" exists ...
	I0819 19:38:42.352425  470079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:38:42.352460  470079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:38:42.368078  470079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41233
	I0819 19:38:42.368545  470079 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:38:42.369066  470079 main.go:141] libmachine: Using API Version  1
	I0819 19:38:42.369087  470079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:38:42.369436  470079 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:38:42.369642  470079 main.go:141] libmachine: (multinode-548379-m02) Calling .DriverName
	I0819 19:38:42.369866  470079 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 19:38:42.369886  470079 main.go:141] libmachine: (multinode-548379-m02) Calling .GetSSHHostname
	I0819 19:38:42.372938  470079 main.go:141] libmachine: (multinode-548379-m02) DBG | domain multinode-548379-m02 has defined MAC address 52:54:00:8a:db:d2 in network mk-multinode-548379
	I0819 19:38:42.373435  470079 main.go:141] libmachine: (multinode-548379-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:db:d2", ip: ""} in network mk-multinode-548379: {Iface:virbr1 ExpiryTime:2024-08-19 20:37:07 +0000 UTC Type:0 Mac:52:54:00:8a:db:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-548379-m02 Clientid:01:52:54:00:8a:db:d2}
	I0819 19:38:42.373462  470079 main.go:141] libmachine: (multinode-548379-m02) DBG | domain multinode-548379-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:8a:db:d2 in network mk-multinode-548379
	I0819 19:38:42.373675  470079 main.go:141] libmachine: (multinode-548379-m02) Calling .GetSSHPort
	I0819 19:38:42.373883  470079 main.go:141] libmachine: (multinode-548379-m02) Calling .GetSSHKeyPath
	I0819 19:38:42.374107  470079 main.go:141] libmachine: (multinode-548379-m02) Calling .GetSSHUsername
	I0819 19:38:42.374273  470079 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-430949/.minikube/machines/multinode-548379-m02/id_rsa Username:docker}
	I0819 19:38:42.460211  470079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:38:42.473732  470079 status.go:257] multinode-548379-m02 status: &{Name:multinode-548379-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 19:38:42.473772  470079 status.go:255] checking status of multinode-548379-m03 ...
	I0819 19:38:42.474205  470079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:38:42.474244  470079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:38:42.490393  470079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0819 19:38:42.490886  470079 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:38:42.491649  470079 main.go:141] libmachine: Using API Version  1
	I0819 19:38:42.491668  470079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:38:42.492020  470079 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:38:42.492236  470079 main.go:141] libmachine: (multinode-548379-m03) Calling .GetState
	I0819 19:38:42.493798  470079 status.go:330] multinode-548379-m03 host status = "Stopped" (err=<nil>)
	I0819 19:38:42.493812  470079 status.go:343] host is not running, skipping remaining checks
	I0819 19:38:42.493818  470079 status.go:257] multinode-548379-m03 status: &{Name:multinode-548379-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
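
In the run above, `minikube status` exits with status 7 once one node is stopped, while still printing per-node output. A minimal sketch, assuming that exit-code behaviour plus the binary path and profile name from this run, of distinguishing that case from a hard failure:

	// status_sketch.go - illustrative only, not part of the test suite.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as multinode_test.go:254 above.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-548379", "status").CombinedOutput()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes running")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
			// Exit status 7 with readable output: at least one node reported Stopped.
			fmt.Printf("cluster degraded:\n%s", out)
		default:
			fmt.Printf("status check failed: %v\n", err)
		}
	}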

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (38.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-548379 node start m03 -v=7 --alsologtostderr: (37.571834743s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.21s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-548379 node delete m03: (1.728302815s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.28s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (182.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-548379 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 19:49:38.961229  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:49:56.398166  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-548379 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.943945242s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548379 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (182.51s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (40.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-548379
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-548379-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-548379-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.638227ms)

                                                
                                                
-- stdout --
	* [multinode-548379-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-548379-m02' is duplicated with machine name 'multinode-548379-m02' in profile 'multinode-548379'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-548379-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-548379-m03 --driver=kvm2  --container-runtime=crio: (39.478214571s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-548379
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-548379: exit status 80 (210.810637ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-548379 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-548379-m03 already exists in multinode-548379-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-548379-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.62s)
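
This test exercises two guard rails: `start -p` rejects a profile name that collides with an existing machine name (exit status 14, MK_USAGE above), and `node add` refuses a node name that already exists as its own profile (exit status 80, GUEST_NODE_ADD above). A hedged sketch of asserting the first rejection from Go, reusing the exit code observed here:

	// name_conflict_sketch.go - illustrative only; exit code 14 is the MK_USAGE status seen above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "multinode-548379-m02", "--driver=kvm2", "--container-runtime=crio")
		err := cmd.Run()
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 14 {
			fmt.Println("got the expected MK_USAGE rejection for a duplicated profile name")
			return
		}
		fmt.Printf("expected exit status 14, got: %v\n", err)
	}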

                                                
                                    
x
+
TestScheduledStopUnix (114.84s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-937884 --memory=2048 --driver=kvm2  --container-runtime=crio
E0819 19:54:38.961694  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-937884 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.205095911s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-937884 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-937884 -n scheduled-stop-937884
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-937884 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-937884 --cancel-scheduled
E0819 19:54:56.398417  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-937884 -n scheduled-stop-937884
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-937884
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-937884 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-937884
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-937884: exit status 7 (66.808765ms)

                                                
                                                
-- stdout --
	scheduled-stop-937884
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-937884 -n scheduled-stop-937884
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-937884 -n scheduled-stop-937884: exit status 7 (75.689299ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-937884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-937884
--- PASS: TestScheduledStopUnix (114.84s)
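
The sequence above arms a stop with `--schedule`, cancels it with `--cancel-scheduled`, then lets a second 15s schedule fire, after which the status commands report Stopped with exit status 7. A small sketch of the arm-then-cancel part, assuming the profile name from this run:

	// scheduled_stop_sketch.go - illustrative only; flags are the ones exercised above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		// Output discarded to keep the sketch short; the real test checks status afterwards.
		return exec.Command("out/minikube-linux-amd64", args...).Run()
	}

	func main() {
		// Arm a stop five minutes out, then change our mind before it fires.
		if err := run("stop", "-p", "scheduled-stop-937884", "--schedule", "5m"); err != nil {
			fmt.Println("schedule failed:", err)
			return
		}
		if err := run("stop", "-p", "scheduled-stop-937884", "--cancel-scheduled"); err != nil {
			fmt.Println("cancel failed:", err)
			return
		}
		fmt.Println("scheduled stop armed and cancelled")
	}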

                                                
                                    
x
+
TestRunningBinaryUpgrade (211.39s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2910303767 start -p running-upgrade-814149 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0819 19:56:19.474628  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2910303767 start -p running-upgrade-814149 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m14.531780463s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-814149 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-814149 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.109809889s)
helpers_test.go:175: Cleaning up "running-upgrade-814149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-814149
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-814149: (1.263101698s)
--- PASS: TestRunningBinaryUpgrade (211.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-803941 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-803941 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (77.689613ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-803941] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-430949/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-430949/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
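
As the stderr explains, `--kubernetes-version` cannot be combined with `--no-kubernetes`, and the suggested remedy is `minikube config unset kubernetes-version`. Purely as an illustration, a hypothetical pre-flight filter that drops the conflicting flag before invoking minikube (plain argument handling, not minikube code):

	// flag_filter_sketch.go - hypothetical helper, not part of minikube.
	package main

	import (
		"fmt"
		"strings"
	)

	// dropKubernetesVersion removes --kubernetes-version (in both "--flag=v" and
	// "--flag v" forms) when --no-kubernetes is also present.
	func dropKubernetesVersion(args []string) []string {
		noK8s := false
		for _, a := range args {
			if a == "--no-kubernetes" {
				noK8s = true
			}
		}
		if !noK8s {
			return args
		}
		out := make([]string, 0, len(args))
		skipNext := false
		for _, a := range args {
			switch {
			case skipNext:
				skipNext = false
			case a == "--kubernetes-version":
				skipNext = true
			case strings.HasPrefix(a, "--kubernetes-version="):
				// drop the conflicting flag
			default:
				out = append(out, a)
			}
		}
		return out
	}

	func main() {
		args := []string{"start", "-p", "NoKubernetes-803941", "--no-kubernetes", "--kubernetes-version=1.20"}
		fmt.Println(dropKubernetesVersion(args))
	}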

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (90.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-803941 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-803941 --driver=kvm2  --container-runtime=crio: (1m30.291578038s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-803941 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (90.55s)

                                                
                                    
x
+
TestPause/serial/Start (84.93s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-232147 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-232147 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m24.929072806s)
--- PASS: TestPause/serial/Start (84.93s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (67.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-803941 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-803941 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m6.102793579s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-803941 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-803941 status -o json: exit status 2 (287.446661ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-803941","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-803941
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-803941: (1.212892701s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (67.60s)
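
The `status -o json` line above is convenient for automation. A sketch of decoding just the fields printed in this run into a Go struct (other fields may exist in the real output):

	// status_json_sketch.go - field names taken from the JSON line above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"NoKubernetes-803941","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st profileStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		// With Kubernetes disabled, the host runs but kubelet and the API server stay stopped.
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
	}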

                                                
                                    
x
+
TestNoKubernetes/serial/Start (34.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-803941 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-803941 --no-kubernetes --driver=kvm2  --container-runtime=crio: (34.266429967s)
--- PASS: TestNoKubernetes/serial/Start (34.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-803941 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-803941 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.505868ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (17.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.807388346s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.657897272s)
--- PASS: TestNoKubernetes/serial/ProfileList (17.47s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-803941
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-803941: (1.650769803s)
--- PASS: TestNoKubernetes/serial/Stop (1.65s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (60.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-803941 --driver=kvm2  --container-runtime=crio
E0819 19:59:38.960861  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/functional-124593/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:59:56.397384  438159 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-430949/.minikube/profiles/addons-966657/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-803941 --driver=kvm2  --container-runtime=crio: (1m0.679919313s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (60.68s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.39s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (109.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1417417861 start -p stopped-upgrade-627417 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1417417861 start -p stopped-upgrade-627417 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m6.252157136s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1417417861 -p stopped-upgrade-627417 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1417417861 -p stopped-upgrade-627417 stop: (2.152055927s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-627417 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-627417 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.031204787s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (109.44s)
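
The upgrade path under test is three commands across two binaries: start the cluster with the archived v1.26.0 release, stop it, then start it again with the freshly built binary. A compressed sketch of that flow, reusing the temporary binary path created by this run:

	// stopped_upgrade_sketch.go - illustrative only; mirrors the three commands run above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func mustRun(bin string, args ...string) {
		if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%s %v failed: %v\n%s", bin, args, err, out))
		}
	}

	func main() {
		old := "/tmp/minikube-v1.26.0.1417417861" // temporary copy of the old release from this run
		cur := "out/minikube-linux-amd64"

		mustRun(old, "start", "-p", "stopped-upgrade-627417", "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
		mustRun(old, "-p", "stopped-upgrade-627417", "stop")
		mustRun(cur, "start", "-p", "stopped-upgrade-627417", "--memory=2200", "--driver=kvm2", "--container-runtime=crio")
		fmt.Println("stopped-binary upgrade flow completed")
	}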

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-803941 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-803941 "sudo systemctl is-active --quiet service kubelet": exit status 1 (204.780476ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-627417
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

                                                
                                    

Test skip (29/222)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    